Jenkins CDH5.3.x-HBase-0.98.6 #10
Test Results: org.apache.hadoop.hbase.regionserver.TestRegionServerMetrics.testMobMetrics

Regression
org.apache.hadoop.hbase.regionserver.TestRegionServerMetrics.testMobMetrics
Failing for the past 1 build (since Failed #10). Took 30 ms.

Error Message

Metrics Counters should be equal expected:<5> but was:<2>

Stacktrace

java.lang.AssertionError: Metrics Counters should be equal expected:<5> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hbase.test.MetricsAssertHelperImpl.assertCounter(MetricsAssertHelperImpl.java:185)
	at org.apache.hadoop.hbase.regionserver.TestRegionServerMetrics.testMobMetrics(TestRegionServerMetrics.java:448)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.junit.runners.Suite.runChild(Suite.java:127)
	at org.junit.runners.Suite.runChild(Suite.java:26)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

Standard Output

Formatting using clusterid: testClusterID

Standard Error

2014-11-19 12:41:35,810 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(840): Starting up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
2014-11-19 12:41:35,877 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(390): Created new mini-cluster data directory:
/var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/hbase-server/target/test-data/b19e18f6-c3f4-494d-84f5-e67669e3e32e/dfscluster_0d554cfb-badb-4752-9801-28d678437b4d, deleteOnExit=true 2014-11-19 12:41:35,878 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(627): Setting test.cache.data to /var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/hbase-server/target/test-data/b19e18f6-c3f4-494d-84f5-e67669e3e32e/cache_data in system properties and HBase conf 2014-11-19 12:41:35,878 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(627): Setting hadoop.tmp.dir to /var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/hbase-server/target/test-data/b19e18f6-c3f4-494d-84f5-e67669e3e32e/hadoop_tmp in system properties and HBase conf 2014-11-19 12:41:35,879 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(627): Setting hadoop.log.dir to /var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/hbase-server/target/test-data/b19e18f6-c3f4-494d-84f5-e67669e3e32e/hadoop_logs in system properties and HBase conf 2014-11-19 12:41:35,879 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(627): Setting mapred.local.dir to /var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/hbase-server/target/test-data/b19e18f6-c3f4-494d-84f5-e67669e3e32e/mapred_local in system properties and HBase conf 2014-11-19 12:41:35,880 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(627): Setting mapred.temp.dir to /var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/hbase-server/target/test-data/b19e18f6-c3f4-494d-84f5-e67669e3e32e/mapred_temp in system properties and HBase conf 2014-11-19 12:41:35,880 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(618): read short circuit is OFF 2014-11-19 12:41:36,188 WARN [pool-1-thread-1] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2014-11-19 12:41:36,530 DEBUG [pool-1-thread-1] fs.HFileSystem(236): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2014-11-19 12:41:38,244 WARN [pool-1-thread-1] impl.MetricsConfig(124): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2014-11-19 12:41:38,507 INFO [pool-1-thread-1] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2014-11-19 12:41:38,619 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26.cloudera.4 2014-11-19 12:41:38,657 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/.repository/org/apache/hadoop/hadoop-hdfs/2.5.0-cdh5.3.0-SNAPSHOT/hadoop-hdfs-2.5.0-cdh5.3.0-SNAPSHOT-tests.jar!/webapps/hdfs to /tmp/Jetty_localhost_58574_hdfs____.l1avwn/webapp 2014-11-19 12:41:38,985 INFO [pool-1-thread-1] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:58574 2014-11-19 12:41:39,994 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26.cloudera.4 2014-11-19 12:41:40,003 INFO [pool-1-thread-1] log.Slf4jLog(67): Extract jar:file:/var/lib/jenkins/workspace/CDH5.3.x-HBase-0.98.6/.repository/org/apache/hadoop/hadoop-hdfs/2.5.0-cdh5.3.0-SNAPSHOT/hadoop-hdfs-2.5.0-cdh5.3.0-SNAPSHOT-tests.jar!/webapps/datanode to /tmp/Jetty_localhost_58527_datanode____.t3123m/webapp 2014-11-19 12:41:40,205 INFO [pool-1-thread-1] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:58527 2014-11-19 12:41:41,279 INFO [IPC Server handler 6 on 45640] blockmanagement.BlockManager(1786): BLOCK* processReport: from storage DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04 node DatanodeRegistration(127.0.0.1, datanodeUuid=b2b843d7-a7fe-4455-9091-32adc9f9f1f8, infoPort=58527, ipcPort=59595, storageInfo=lv=-56;cid=testClusterID;nsid=399688093;c=0), blocks: 0, hasStaleStorages: true, processing time: 2 msecs 2014-11-19 12:41:41,280 INFO [IPC Server handler 6 on 45640] blockmanagement.BlockManager(1786): BLOCK* processReport: from storage DS-e87da309-3799-4e45-ab72-12963622a667 node DatanodeRegistration(127.0.0.1, datanodeUuid=b2b843d7-a7fe-4455-9091-32adc9f9f1f8, infoPort=58527, ipcPort=59595, storageInfo=lv=-56;cid=testClusterID;nsid=399688093;c=0), blocks: 0, hasStaleStorages: false, processing time: 0 msecs 2014-11-19 12:41:41,460 INFO [pool-1-thread-1] zookeeper.MiniZooKeeperCluster(200): Started MiniZK Cluster and connect 1 ZK server on client port: 64128 2014-11-19 12:41:41,770 INFO [IPC Server handler 3 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741825_1001{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|RBW]]} size 7 2014-11-19 12:41:42,181 DEBUG [pool-1-thread-1] util.FSUtils(671): Created version file at hdfs://localhost:45640/user/jenkins/hbase with version=8 2014-11-19 12:41:42,241 DEBUG [pool-1-thread-1] client.HConnectionManager(2817): master/p0120.sjc.cloudera.com/172.17.188.30:0 HConnection server-to-server retries=350 2014-11-19 12:41:42,468 INFO [pool-1-thread-1] master.HMaster(465): hbase.rootdir=hdfs://localhost:45640/user/jenkins/hbase, hbase.cluster.distributed=false 2014-11-19 12:41:42,483 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=master:33095 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:42,676 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095, quorum=localhost:64128, baseZNode=/hbase Received 
ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:42,680 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): master:33095-0x149c9ca09350000 connected 2014-11-19 12:41:42,869 DEBUG [pool-1-thread-1] client.HConnectionManager(2817): regionserver/p0120.sjc.cloudera.com/172.17.188.30:0 HConnection server-to-server retries=350 2014-11-19 12:41:42,946 INFO [pool-1-thread-1] ipc.SimpleRpcScheduler(123): Using deadline as user call queue, count=1 2014-11-19 12:41:42,959 INFO [pool-1-thread-1] hfile.CacheConfig(448): Allocating LruBlockCache with maximum size 675.6 M 2014-11-19 12:41:42,967 INFO [pool-1-thread-1] mob.MobFileCache(121): MobFileCache is initialized, and the cache size is 1000 2014-11-19 12:41:42,981 INFO [pool-1-thread-1] log.Slf4jLog(67): jetty-6.1.26.cloudera.4 2014-11-19 12:41:43,025 INFO [pool-1-thread-1] log.Slf4jLog(67): Started HttpServer$SelectChannelConnectorWithSafeStartup@0.0.0.0:48408 2014-11-19 12:41:43,037 DEBUG [M:0;p0120:33095] zookeeper.ZKUtil(430): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2014-11-19 12:41:43,040 DEBUG [M:0;p0120:33095] zookeeper.ZKUtil(430): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2014-11-19 12:41:43,057 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2014-11-19 12:41:43,060 DEBUG [M:0;p0120:33095] zookeeper.ZKUtil(428): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2014-11-19 12:41:43,062 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(428): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2014-11-19 12:41:43,062 WARN [M:0;p0120:33095] hbase.ZNodeClearer(58): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2014-11-19 12:41:43,062 DEBUG [pool-1-thread-1-EventThread] master.ActiveMasterManager(119): A master is now available 2014-11-19 12:41:43,063 INFO [M:0;p0120:33095] master.ActiveMasterManager(170): Registered Active Master=p0120.sjc.cloudera.com,33095,1416429702460 2014-11-19 12:41:43,135 INFO [pool-1-thread-1] regionserver.ShutdownHook(87): Installed shutdown hook thread: Shutdownhook:RS:0;p0120:56624 2014-11-19 12:41:43,139 INFO [RS:0;p0120:56624] zookeeper.RecoverableZooKeeper(119): Process identifier=regionserver:56624 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:43,281 INFO [IPC Server handler 2 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741826_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|RBW]]} size 0 2014-11-19 12:41:43,285 DEBUG [M:0;p0120:33095] util.FSUtils(823): Created cluster ID file at hdfs://localhost:45640/user/jenkins/hbase/hbase.id with ID: 9473feea-de6b-47e6-8fed-14ba3e22271e 2014-11-19 12:41:43,286 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(310): regionserver:56624, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:43,286 DEBUG [RS:0;p0120:56624] zookeeper.ZKUtil(428): regionserver:56624, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2014-11-19 12:41:43,287 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(387): regionserver:56624-0x149c9ca09350001 connected 2014-11-19 12:41:43,288 DEBUG [RS:0;p0120:56624] zookeeper.ZKUtil(430): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2014-11-19 12:41:43,365 INFO [M:0;p0120:33095] master.MasterFileSystem(526): BOOTSTRAP: creating hbase:meta region 2014-11-19 12:41:43,426 INFO [M:0;p0120:33095] regionserver.HRegion(4377): creating HRegion hbase:meta HTD == 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'} RootDir = hdfs://localhost:45640/user/jenkins/hbase Table name == hbase:meta 2014-11-19 12:41:43,483 INFO [IPC Server handler 3 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741827_1003{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:43,495 INFO [M:0;p0120:33095] wal.FSHLog(402): WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true 2014-11-19 12:41:43,585 INFO [M:0;p0120:33095] wal.FSHLog(584): New WAL /user/jenkins/hbase/data/hbase/meta/1588230740/WALs/hlog.1416429703503 2014-11-19 12:41:43,604 DEBUG [M:0;p0120:33095] regionserver.HRegion(641): Instantiated hbase:meta,,1.1588230740 2014-11-19 12:41:43,684 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(88): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; 
delete expired; major period 604800000, major jitter 0.500000 2014-11-19 12:41:43,688 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(193): No StoreFiles for: hdfs://localhost:45640/user/jenkins/hbase/data/hbase/meta/1588230740/info 2014-11-19 12:41:43,695 INFO [StoreOpener-1588230740-1] util.ChecksumType$2(68): Checksum using org.apache.hadoop.util.PureJavaCrc32 2014-11-19 12:41:43,695 INFO [StoreOpener-1588230740-1] util.ChecksumType$3(111): Checksum can use org.apache.hadoop.util.PureJavaCrc32C 2014-11-19 12:41:43,702 DEBUG [M:0;p0120:33095] regionserver.HRegion(3192): Found 0 recovered edits file(s) under hdfs://localhost:45640/user/jenkins/hbase/data/hbase/meta/1588230740 2014-11-19 12:41:43,708 INFO [M:0;p0120:33095] regionserver.HRegion(742): Onlined 1588230740; next sequenceid=1 2014-11-19 12:41:43,708 DEBUG [M:0;p0120:33095] regionserver.HRegion(1111): Closing hbase:meta,,1.1588230740: disabling compactions & flushes 2014-11-19 12:41:43,709 DEBUG [M:0;p0120:33095] regionserver.HRegion(1138): Updates disabled for region hbase:meta,,1.1588230740 2014-11-19 12:41:43,710 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1] regionserver.HStore(788): Closed info 2014-11-19 12:41:43,711 INFO [M:0;p0120:33095] regionserver.HRegion(1220): Closed hbase:meta,,1.1588230740 2014-11-19 12:41:43,711 DEBUG [M:0;p0120:33095-WAL.AsyncNotifier] wal.FSHLog$AsyncNotifier(1351): M:0;p0120:33095-WAL.AsyncNotifier interrupted while waiting for notification from AsyncSyncer thread 2014-11-19 12:41:43,712 INFO [M:0;p0120:33095-WAL.AsyncNotifier] wal.FSHLog$AsyncNotifier(1356): M:0;p0120:33095-WAL.AsyncNotifier exiting 2014-11-19 12:41:43,712 DEBUG [M:0;p0120:33095-WAL.AsyncSyncer0] wal.FSHLog$AsyncSyncer(1297): M:0;p0120:33095-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread 2014-11-19 12:41:43,712 INFO [M:0;p0120:33095-WAL.AsyncSyncer0] wal.FSHLog$AsyncSyncer(1302): M:0;p0120:33095-WAL.AsyncSyncer0 exiting 2014-11-19 12:41:43,712 DEBUG [M:0;p0120:33095-WAL.AsyncSyncer1] wal.FSHLog$AsyncSyncer(1297): M:0;p0120:33095-WAL.AsyncSyncer1 interrupted while waiting for notification from AsyncWriter thread 2014-11-19 12:41:43,712 INFO [M:0;p0120:33095-WAL.AsyncSyncer1] wal.FSHLog$AsyncSyncer(1302): M:0;p0120:33095-WAL.AsyncSyncer1 exiting 2014-11-19 12:41:43,713 DEBUG [M:0;p0120:33095-WAL.AsyncSyncer2] wal.FSHLog$AsyncSyncer(1297): M:0;p0120:33095-WAL.AsyncSyncer2 interrupted while waiting for notification from AsyncWriter thread 2014-11-19 12:41:43,713 INFO [M:0;p0120:33095-WAL.AsyncSyncer2] wal.FSHLog$AsyncSyncer(1302): M:0;p0120:33095-WAL.AsyncSyncer2 exiting 2014-11-19 12:41:43,713 DEBUG [M:0;p0120:33095-WAL.AsyncSyncer3] wal.FSHLog$AsyncSyncer(1297): M:0;p0120:33095-WAL.AsyncSyncer3 interrupted while waiting for notification from AsyncWriter thread 2014-11-19 12:41:43,713 INFO [M:0;p0120:33095-WAL.AsyncSyncer3] wal.FSHLog$AsyncSyncer(1302): M:0;p0120:33095-WAL.AsyncSyncer3 exiting 2014-11-19 12:41:43,713 DEBUG [M:0;p0120:33095-WAL.AsyncSyncer4] wal.FSHLog$AsyncSyncer(1297): M:0;p0120:33095-WAL.AsyncSyncer4 interrupted while waiting for notification from AsyncWriter thread 2014-11-19 12:41:43,714 INFO [M:0;p0120:33095-WAL.AsyncSyncer4] wal.FSHLog$AsyncSyncer(1302): M:0;p0120:33095-WAL.AsyncSyncer4 exiting 2014-11-19 12:41:43,714 DEBUG [M:0;p0120:33095-WAL.AsyncWriter] wal.FSHLog$AsyncWriter(1163): M:0;p0120:33095-WAL.AsyncWriter interrupted while waiting for newer writes added to local buffer 2014-11-19 12:41:43,714 INFO [M:0;p0120:33095-WAL.AsyncWriter] 
wal.FSHLog$AsyncWriter(1168): M:0;p0120:33095-WAL.AsyncWriter exiting 2014-11-19 12:41:43,714 DEBUG [M:0;p0120:33095] wal.FSHLog(948): Closing WAL writer in hdfs://localhost:45640/user/jenkins/hbase/data/hbase/meta/1588230740/WALs 2014-11-19 12:41:43,719 INFO [IPC Server handler 0 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741828_1004{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 91 2014-11-19 12:41:44,135 DEBUG [M:0;p0120:33095] wal.FSHLog(892): Moved 1 WAL file(s) to /user/jenkins/hbase/data/hbase/meta/1588230740/oldWALs 2014-11-19 12:41:44,212 INFO [IPC Server handler 3 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741829_1005{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:44,244 DEBUG [M:0;p0120:33095] util.FSTableDescriptors(651): Wrote descriptor into: hdfs://localhost:45640/user/jenkins/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2014-11-19 12:41:44,286 INFO [M:0;p0120:33095] fs.HFileSystem(267): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2014-11-19 12:41:44,296 DEBUG [M:0;p0120:33095] master.SplitLogManager(1346): Distributed log replay=false, hfile.format.version=3 2014-11-19 12:41:44,304 INFO [M:0;p0120:33095] master.SplitLogManager(224): Timeout=120000, unassigned timeout=180000, distributedLogReplay=false 2014-11-19 12:41:44,306 INFO [M:0;p0120:33095] master.SplitLogManager(1101): Found 0 orphan tasks and 0 rescan nodes 2014-11-19 12:41:44,306 DEBUG [M:0;p0120:33095] util.FSTableDescriptors(198): Fetching table descriptors from the filesystem. 
2014-11-19 12:41:44,370 INFO [M:0;p0120:33095] zookeeper.RecoverableZooKeeper(119): Process identifier=hconnection-0x22e54092 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:44,390 DEBUG [M:0;p0120:33095-EventThread] zookeeper.ZooKeeperWatcher(310): hconnection-0x22e54092, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:44,392 DEBUG [M:0;p0120:33095-EventThread] zookeeper.ZooKeeperWatcher(387): hconnection-0x22e54092-0x149c9ca09350002 connected 2014-11-19 12:41:44,411 DEBUG [M:0;p0120:33095] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@2e095719 2014-11-19 12:41:44,412 DEBUG [M:0;p0120:33095] zookeeper.ZKUtil(430): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/meta-region-server 2014-11-19 12:41:44,429 DEBUG [M:0;p0120:33095] zookeeper.ZKUtil(430): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2014-11-19 12:41:44,465 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2014-11-19 12:41:44,465 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(310): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2014-11-19 12:41:44,466 INFO [M:0;p0120:33095] master.HMaster(713): Server active/primary master=p0120.sjc.cloudera.com,33095,1416429702460, sessionid=0x149c9ca09350000, setting cluster-up flag (Was=false) 2014-11-19 12:41:44,470 INFO [RS:0;p0120:56624] zookeeper.RecoverableZooKeeper(119): Process identifier=hconnection-0x3e380b27 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:44,482 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(310): hconnection-0x3e380b27, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:44,483 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(387): hconnection-0x3e380b27-0x149c9ca09350003 connected 2014-11-19 12:41:44,484 DEBUG [RS:0;p0120:56624] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@73b4ae13 2014-11-19 12:41:44,485 DEBUG [RS:0;p0120:56624] zookeeper.ZKUtil(430): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/meta-region-server 2014-11-19 12:41:44,487 INFO [RS:0;p0120:56624] regionserver.HRegionServer(777): ClusterId : 9473feea-de6b-47e6-8fed-14ba3e22271e 2014-11-19 12:41:44,491 INFO [RS:0;p0120:56624] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot is initializing 2014-11-19 12:41:44,523 INFO [RS:0;p0120:56624] zookeeper.RecoverableZooKeeper(529): Node /hbase/online-snapshot already exists and this is not a retry 2014-11-19 12:41:44,539 INFO [RS:0;p0120:56624] zookeeper.RecoverableZooKeeper(529): Node /hbase/online-snapshot/acquired already exists and this is not a retry 2014-11-19 12:41:44,558 INFO [RS:0;p0120:56624] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot is initialized 2014-11-19 12:41:44,561 INFO [RS:0;p0120:56624] 
regionserver.MemStoreFlusher(119): globalMemStoreLimit=675.6 M, globalMemStoreLimitLowMark=641.8 M, maxHeap=1.6 G 2014-11-19 12:41:44,565 INFO [M:0;p0120:33095] zookeeper.RecoverableZooKeeper(529): Node /hbase/online-snapshot/abort already exists and this is not a retry 2014-11-19 12:41:44,565 INFO [M:0;p0120:33095] procedure.ZKProcedureUtil(271): Clearing all procedure znodes: /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2014-11-19 12:41:44,565 INFO [RS:0;p0120:56624] regionserver.HRegionServer$CompactionChecker(1448): CompactionChecker runs every 1sec 2014-11-19 12:41:44,568 DEBUG [M:0;p0120:33095] procedure.ZKProcedureCoordinatorRpcs(195): Starting the controller for procedure member:p0120.sjc.cloudera.com,33095,1416429702460 2014-11-19 12:41:44,573 INFO [RS:0;p0120:56624] regionserver.HRegionServer(2068): reportForDuty to master=p0120.sjc.cloudera.com,33095,1416429702460 with port=56624, startcode=1416429702954 2014-11-19 12:41:44,609 DEBUG [M:0;p0120:33095] executor.ExecutorService(99): Starting executor service name=MASTER_OPEN_REGION-p0120:33095, corePoolSize=5, maxPoolSize=5 2014-11-19 12:41:44,609 DEBUG [M:0;p0120:33095] executor.ExecutorService(99): Starting executor service name=MASTER_CLOSE_REGION-p0120:33095, corePoolSize=5, maxPoolSize=5 2014-11-19 12:41:44,610 DEBUG [M:0;p0120:33095] executor.ExecutorService(99): Starting executor service name=MASTER_SERVER_OPERATIONS-p0120:33095, corePoolSize=5, maxPoolSize=5 2014-11-19 12:41:44,610 DEBUG [M:0;p0120:33095] executor.ExecutorService(99): Starting executor service name=MASTER_META_SERVER_OPERATIONS-p0120:33095, corePoolSize=5, maxPoolSize=5 2014-11-19 12:41:44,611 DEBUG [M:0;p0120:33095] executor.ExecutorService(99): Starting executor service name=M_LOG_REPLAY_OPS-p0120:33095, corePoolSize=10, maxPoolSize=10 2014-11-19 12:41:44,611 DEBUG [M:0;p0120:33095] executor.ExecutorService(99): Starting executor service name=MASTER_TABLE_OPERATIONS-p0120:33095, corePoolSize=1, maxPoolSize=1 2014-11-19 12:41:44,614 DEBUG [M:0;p0120:33095] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2014-11-19 12:41:44,617 INFO [M:0;p0120:33095] zookeeper.RecoverableZooKeeper(119): Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:44,623 DEBUG [M:0;p0120:33095-EventThread] zookeeper.ZooKeeperWatcher(310): replicationLogCleaner, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:44,626 DEBUG [M:0;p0120:33095-EventThread] zookeeper.ZooKeeperWatcher(387): replicationLogCleaner-0x149c9ca09350004 connected 2014-11-19 12:41:44,669 DEBUG [M:0;p0120:33095] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2014-11-19 12:41:44,674 DEBUG [M:0;p0120:33095] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotLogCleaner 2014-11-19 12:41:44,677 DEBUG [M:0;p0120:33095] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2014-11-19 12:41:44,679 DEBUG [M:0;p0120:33095] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2014-11-19 12:41:44,679 DEBUG [M:0;p0120:33095] cleaner.CleanerChore(91): initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2014-11-19 12:41:44,680 INFO [M:0;p0120:33095] 
master.ServerManager(867): Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 1, timeout of 4500 ms, interval of 1500 ms. 2014-11-19 12:41:44,684 DEBUG [RS:0;p0120:56624] regionserver.HRegionServer(2084): Master is not running yet 2014-11-19 12:41:44,684 WARN [RS:0;p0120:56624] regionserver.HRegionServer(876): reportForDuty failed; sleeping and then retrying. 2014-11-19 12:41:45,685 INFO [RS:0;p0120:56624] regionserver.HRegionServer(2068): reportForDuty to master=p0120.sjc.cloudera.com,33095,1416429702460 with port=56624, startcode=1416429702954 2014-11-19 12:41:45,694 INFO [FifoRpcScheduler.handler1-thread-2] master.ServerManager(402): Registering server=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:45,700 DEBUG [RS:0;p0120:56624] regionserver.HRegionServer(1281): Config from master: hbase.rootdir=hdfs://localhost:45640/user/jenkins/hbase 2014-11-19 12:41:45,700 DEBUG [RS:0;p0120:56624] regionserver.HRegionServer(1281): Config from master: fs.default.name=hdfs://localhost:45640 2014-11-19 12:41:45,701 DEBUG [RS:0;p0120:56624] regionserver.HRegionServer(1281): Config from master: hbase.master.info.port=-1 2014-11-19 12:41:45,701 INFO [M:0;p0120:33095] master.ServerManager(884): Finished waiting for region servers count to settle; checked in 1, slept for 1021 ms, expecting minimum of 1, maximum of 1, master is running. 2014-11-19 12:41:45,706 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2014-11-19 12:41:45,707 DEBUG [RS:0;p0120:56624] zookeeper.ZKUtil(428): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:45,708 WARN [RS:0;p0120:56624] hbase.ZNodeClearer(58): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2014-11-19 12:41:45,708 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZKUtil(428): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:45,711 DEBUG [pool-1-thread-1-EventThread] zookeeper.RegionServerTracker(91): RS node: /hbase/rs/p0120.sjc.cloudera.com,56624,1416429702954 data: PBUF�� 2014-11-19 12:41:45,712 INFO [RS:0;p0120:56624] fs.HFileSystem(267): Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2014-11-19 12:41:45,715 DEBUG [RS:0;p0120:56624] regionserver.HRegionServer(1545): logdir=hdfs://localhost:45640/user/jenkins/hbase/WALs/p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:45,748 DEBUG [RS:0;p0120:56624] regionserver.Replication(138): ReplicationStatisticsThread 300 2014-11-19 12:41:45,749 INFO [RS:0;p0120:56624] wal.FSHLog(402): WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true 2014-11-19 12:41:45,774 INFO [RS:0;p0120:56624] wal.FSHLog(584): New WAL /user/jenkins/hbase/WALs/p0120.sjc.cloudera.com,56624,1416429702954/p0120.sjc.cloudera.com%2C56624%2C1416429702954.1416429705757 2014-11-19 12:41:45,782 INFO [RS:0;p0120:56624] regionserver.MetricsRegionServerWrapperImpl(126): Computing regionserver metrics every 5000 milliseconds 2014-11-19 12:41:45,787 DEBUG [RS:0;p0120:56624] executor.ExecutorService(99): Starting executor service name=RS_OPEN_REGION-p0120:56624, corePoolSize=3, maxPoolSize=3 2014-11-19 12:41:45,787 DEBUG [RS:0;p0120:56624] executor.ExecutorService(99): Starting executor service name=RS_OPEN_META-p0120:56624, corePoolSize=1, maxPoolSize=1 2014-11-19 12:41:45,788 DEBUG [RS:0;p0120:56624] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_REGION-p0120:56624, corePoolSize=3, maxPoolSize=3 2014-11-19 12:41:45,788 DEBUG [RS:0;p0120:56624] executor.ExecutorService(99): Starting executor service name=RS_CLOSE_META-p0120:56624, corePoolSize=1, maxPoolSize=1 2014-11-19 12:41:45,788 DEBUG [RS:0;p0120:56624] executor.ExecutorService(99): Starting executor service name=RS_LOG_REPLAY_OPS-p0120:56624, corePoolSize=2, maxPoolSize=2 2014-11-19 12:41:45,792 DEBUG [RS:0;p0120:56624] zookeeper.ZKUtil(428): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:45,793 INFO [RS:0;p0120:56624] regionserver.ReplicationSourceManager(219): Current list of replicators: [p0120.sjc.cloudera.com,56624,1416429702954] other RSs: [p0120.sjc.cloudera.com,56624,1416429702954] 2014-11-19 12:41:45,834 INFO [RS:0;p0120:56624] zookeeper.RecoverableZooKeeper(119): Process identifier=hconnection-0x5b89f62c connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:45,882 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(310): hconnection-0x5b89f62c, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:45,888 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): B.default Start Handler index=0 queue=0 2014-11-19 12:41:45,889 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): B.default Start Handler index=1 queue=0 2014-11-19 12:41:45,889 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): B.default Start Handler index=2 queue=0 2014-11-19 12:41:45,890 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): B.default Start Handler 
index=3 queue=0 2014-11-19 12:41:45,890 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(387): hconnection-0x5b89f62c-0x149c9ca09350005 connected 2014-11-19 12:41:45,890 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): B.default Start Handler index=4 queue=0 2014-11-19 12:41:45,891 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=0 queue=0 2014-11-19 12:41:45,891 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=1 queue=0 2014-11-19 12:41:45,891 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=2 queue=0 2014-11-19 12:41:45,892 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=3 queue=0 2014-11-19 12:41:45,892 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=4 queue=0 2014-11-19 12:41:45,892 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=5 queue=0 2014-11-19 12:41:45,893 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=6 queue=0 2014-11-19 12:41:45,893 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=7 queue=0 2014-11-19 12:41:45,893 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=8 queue=0 2014-11-19 12:41:45,894 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Priority Start Handler index=9 queue=0 2014-11-19 12:41:45,894 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Replication Start Handler index=0 queue=0 2014-11-19 12:41:45,895 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Replication Start Handler index=1 queue=0 2014-11-19 12:41:45,895 DEBUG [RS:0;p0120:56624] ipc.RpcExecutor(101): Replication Start Handler index=2 queue=0 2014-11-19 12:41:45,926 INFO [RS:0;p0120:56624] regionserver.HRegionServer(1317): Serving as p0120.sjc.cloudera.com,56624,1416429702954, RpcServer on p0120.sjc.cloudera.com/172.17.188.30:56624, sessionid=0x149c9ca09350001 2014-11-19 12:41:45,926 INFO [SplitLogWorker-p0120.sjc.cloudera.com,56624,1416429702954] regionserver.SplitLogWorker(176): SplitLogWorker p0120.sjc.cloudera.com,56624,1416429702954 starting 2014-11-19 12:41:45,929 INFO [RS:0;p0120:56624] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot is starting 2014-11-19 12:41:45,929 INFO [SplitLogWorker-p0120.sjc.cloudera.com,56624,1416429702954] zookeeper.RecoverableZooKeeper(119): Process identifier=hconnection-0x6804075a connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:45,929 DEBUG [RS:0;p0120:56624] snapshot.RegionServerSnapshotManager(121): Start Snapshot Manager p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:45,929 DEBUG [RS:0;p0120:56624] procedure.ZKProcedureMemberRpcs(337): Starting procedure member 'p0120.sjc.cloudera.com,56624,1416429702954' 2014-11-19 12:41:45,929 DEBUG [RS:0;p0120:56624] procedure.ZKProcedureMemberRpcs(136): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2014-11-19 12:41:45,931 DEBUG [RS:0;p0120:56624] procedure.ZKProcedureMemberRpcs(152): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2014-11-19 12:41:45,966 DEBUG [SplitLogWorker-p0120.sjc.cloudera.com,56624,1416429702954-EventThread] zookeeper.ZooKeeperWatcher(310): hconnection-0x6804075a, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:45,966 INFO [RS:0;p0120:56624] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot is started 2014-11-19 12:41:45,967 DEBUG 
[SplitLogWorker-p0120.sjc.cloudera.com,56624,1416429702954-EventThread] zookeeper.ZooKeeperWatcher(387): hconnection-0x6804075a-0x149c9ca09350006 connected 2014-11-19 12:41:45,973 INFO [RS:0;p0120:56624] quotas.RegionServerQuotaManager(63): Quota support disabled 2014-11-19 12:41:46,783 INFO [M:0;p0120:33095] zookeeper.MetaRegionTracker(164): Unsetting hbase:meta region location in ZooKeeper 2014-11-19 12:41:46,798 WARN [M:0;p0120:33095] zookeeper.RecoverableZooKeeper(187): Node /hbase/meta-region-server already deleted, retry=false 2014-11-19 12:41:46,801 DEBUG [M:0;p0120:33095] master.AssignmentManager(2250): No previous transition plan found (or ignoring an existing plan) for hbase:meta,,1.1588230740; generated random plan=hri=hbase:meta,,1.1588230740, src=, dest=p0120.sjc.cloudera.com,56624,1416429702954; 1 (online=1, available=1) available servers, forceNewPlan=false 2014-11-19 12:41:46,801 DEBUG [M:0;p0120:33095] zookeeper.ZKAssign(206): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Creating (or updating) unassigned node 1588230740 with OFFLINE state 2014-11-19 12:41:46,824 DEBUG [M:0;p0120:33095] master.AssignmentManager(1946): Setting table hbase:meta to ENABLED state. 2014-11-19 12:41:46,856 INFO [M:0;p0120:33095] master.AssignmentManager(1961): Assigning hbase:meta,,1.1588230740 to p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:46,857 INFO [M:0;p0120:33095] master.RegionStates(316): Transitioned {1588230740 state=OFFLINE, ts=1416429706801, server=null} to {1588230740 state=PENDING_OPEN, ts=1416429706857, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:46,857 DEBUG [M:0;p0120:33095] master.ServerManager(802): New admin connection to p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:46,886 INFO [PriorityRpcServer.handler=0,queue=0,port=56624] regionserver.HRegionServer(3770): Open hbase:meta,,1.1588230740 2014-11-19 12:41:46,891 DEBUG [RS_OPEN_META-p0120:56624-0] zookeeper.ZKAssign(832): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioning 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2014-11-19 12:41:46,896 INFO [M:0;p0120:33095] master.ServerManager(598): AssignmentManager hasn't finished failover cleanup; waiting 2014-11-19 12:41:46,914 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/region-in-transition/1588230740 2014-11-19 12:41:46,914 DEBUG [RS_OPEN_META-p0120:56624-0] zookeeper.ZKAssign(907): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioned node 1588230740 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2014-11-19 12:41:46,915 DEBUG [RS_OPEN_META-p0120:56624-0] regionserver.HRegionServer(1562): logdir=hdfs://localhost:45640/user/jenkins/hbase/WALs/p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:46,916 INFO [RS_OPEN_META-p0120:56624-0] wal.FSHLog(402): WAL/HLog configuration: blocksize=128 MB, rollsize=121.60 MB, enabled=true 2014-11-19 12:41:46,918 DEBUG [AM.ZK.Worker-pool2-t1] master.AssignmentManager(814): Handling RS_ZK_REGION_OPENING, server=p0120.sjc.cloudera.com,56624,1416429702954, region=1588230740, current_state={1588230740 state=PENDING_OPEN, ts=1416429706857, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:46,919 INFO [AM.ZK.Worker-pool2-t1] master.RegionStates(316): Transitioned {1588230740 
state=PENDING_OPEN, ts=1416429706857, server=p0120.sjc.cloudera.com,56624,1416429702954} to {1588230740 state=OPENING, ts=1416429706919, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:46,938 INFO [RS_OPEN_META-p0120:56624-0] wal.FSHLog(584): New WAL /user/jenkins/hbase/WALs/p0120.sjc.cloudera.com,56624,1416429702954/p0120.sjc.cloudera.com%2C56624%2C1416429702954.1416429706920.meta 2014-11-19 12:41:46,940 DEBUG [RS_OPEN_META-p0120:56624-0] regionserver.HRegion(4563): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2014-11-19 12:41:46,978 DEBUG [RS_OPEN_META-p0120:56624-0] coprocessor.CoprocessorHost(193): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2014-11-19 12:41:46,982 DEBUG [RS_OPEN_META-p0120:56624-0] regionserver.HRegion(5637): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2014-11-19 12:41:46,985 INFO [RS_OPEN_META-p0120:56624-0] regionserver.RegionCoprocessorHost(229): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2014-11-19 12:41:46,991 DEBUG [RS_OPEN_META-p0120:56624-0] regionserver.MetricsRegionSourceImpl(67): Creating new MetricsRegionSourceImpl for table meta 1588230740 2014-11-19 12:41:46,991 DEBUG [RS_OPEN_META-p0120:56624-0] regionserver.HRegion(641): Instantiated hbase:meta,,1.1588230740 2014-11-19 12:41:47,000 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(88): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2014-11-19 12:41:47,003 DEBUG [StoreOpener-1588230740-1] regionserver.HRegionFileSystem(193): No StoreFiles for: hdfs://localhost:45640/user/jenkins/hbase/data/hbase/meta/1588230740/info 2014-11-19 12:41:47,005 DEBUG [RS_OPEN_META-p0120:56624-0] regionserver.HRegion(3192): Found 0 recovered edits file(s) under hdfs://localhost:45640/user/jenkins/hbase/data/hbase/meta/1588230740 2014-11-19 12:41:47,008 INFO [RS_OPEN_META-p0120:56624-0] regionserver.HRegion(742): Onlined 1588230740; next sequenceid=1 2014-11-19 12:41:47,008 DEBUG [RS_OPEN_META-p0120:56624-0] zookeeper.ZKAssign(644): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Attempting to retransition opening state of node 1588230740 2014-11-19 12:41:47,010 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1822): Post open deploy tasks for region=hbase:meta,,1.1588230740 2014-11-19 12:41:47,011 INFO [PostOpenDeployTasks:1588230740] zookeeper.MetaRegionTracker(123): Setting hbase:meta region location in ZooKeeper as p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,023 DEBUG [RS:0;p0120:56624-EventThread] zookeeper.ZooKeeperWatcher(310): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/meta-region-server 2014-11-19 12:41:47,023 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/meta-region-server 2014-11-19 12:41:47,026 INFO [PostOpenDeployTasks:1588230740] regionserver.HRegionServer(1847): Finished post open deploy task for hbase:meta,,1.1588230740 2014-11-19 12:41:47,026 DEBUG 
[RS_OPEN_META-p0120:56624-0] zookeeper.ZKAssign(832): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioning 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2014-11-19 12:41:47,190 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/region-in-transition/1588230740 2014-11-19 12:41:47,190 DEBUG [RS_OPEN_META-p0120:56624-0] zookeeper.ZKAssign(907): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioned node 1588230740 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2014-11-19 12:41:47,190 DEBUG [RS_OPEN_META-p0120:56624-0] handler.OpenRegionHandler(379): Transitioned 1588230740 to OPENED in zk on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,191 DEBUG [RS_OPEN_META-p0120:56624-0] handler.OpenRegionHandler(179): Opened hbase:meta,,1.1588230740 on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,191 DEBUG [AM.ZK.Worker-pool2-t2] master.AssignmentManager(814): Handling RS_ZK_REGION_OPENED, server=p0120.sjc.cloudera.com,56624,1416429702954, region=1588230740, current_state={1588230740 state=OPENING, ts=1416429706919, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,192 INFO [AM.ZK.Worker-pool2-t2] master.RegionStates(316): Transitioned {1588230740 state=OPENING, ts=1416429706919, server=p0120.sjc.cloudera.com,56624,1416429702954} to {1588230740 state=OPEN, ts=1416429707191, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,194 INFO [AM.ZK.Worker-pool2-t2] handler.OpenedRegionHandler(147): Handling OPENED of 1588230740 from p0120.sjc.cloudera.com,56624,1416429702954; deleting unassigned node 2014-11-19 12:41:47,237 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/region-in-transition/1588230740 2014-11-19 12:41:47,237 DEBUG [AM.ZK.Worker-pool2-t2] zookeeper.ZKAssign(480): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Deleted unassigned node 1588230740 in expected state RS_ZK_REGION_OPENED 2014-11-19 12:41:47,238 DEBUG [AM.ZK.Worker-pool2-t3] master.AssignmentManager$4(1199): Znode hbase:meta,,1.1588230740 deleted, state: {1588230740 state=OPEN, ts=1416429707191, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,239 INFO [AM.ZK.Worker-pool2-t3] master.RegionStates(377): Onlined 1588230740 on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,240 INFO [M:0;p0120:33095] master.HMaster(1061): hbase:meta assigned=1, rit=false, location=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,325 INFO [M:0;p0120:33095] catalog.MetaMigrationConvertingToPB(166): hbase:meta doesn't have any entries to update. 2014-11-19 12:41:47,326 INFO [M:0;p0120:33095] catalog.MetaMigrationConvertingToPB(132): META already up-to date with PB serialization 2014-11-19 12:41:47,337 INFO [M:0;p0120:33095] master.AssignmentManager(533): Clean cluster startup. 
Assigning userregions 2014-11-19 12:41:47,338 DEBUG [M:0;p0120:33095] zookeeper.ZKAssign(498): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Deleting any existing unassigned nodes 2014-11-19 12:41:47,344 INFO [M:0;p0120:33095] master.SnapshotOfRegionAssignmentFromMeta(95): Start to scan the hbase:meta for the current region assignment snappshot 2014-11-19 12:41:47,350 INFO [M:0;p0120:33095] master.SnapshotOfRegionAssignmentFromMeta(138): Finished to scan the hbase:meta for the current region assignmentsnapshot 2014-11-19 12:41:47,365 INFO [M:0;p0120:33095] master.TableNamespaceManager(85): Namespace table not found. Creating... 2014-11-19 12:41:47,409 DEBUG [M:0;p0120:33095] lock.ZKInterProcessLockBase(226): Acquired a lock for /hbase/table-lock/hbase:namespace/write-master:330950000000000 2014-11-19 12:41:47,441 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] handler.CreateTableHandler(161): Create table hbase:namespace 2014-11-19 12:41:47,461 INFO [IPC Server handler 1 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741832_1008{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:47,530 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] util.FSTableDescriptors(651): Wrote descriptor into: hdfs://localhost:45640/user/jenkins/hbase/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2014-11-19 12:41:47,537 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(4377): creating HRegion hbase:namespace HTD == 'hbase:namespace', {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:45640/user/jenkins/hbase/.tmp Table name == hbase:namespace 2014-11-19 12:41:47,555 INFO [IPC Server handler 9 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741833_1009{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:47,557 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(641): Instantiated hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 2014-11-19 12:41:47,558 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1111): Closing hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08.: disabling compactions & flushes 2014-11-19 12:41:47,558 DEBUG [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1138): Updates disabled for region hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 2014-11-19 12:41:47,558 INFO [RegionOpenAndInitThread-hbase:namespace-1] regionserver.HRegion(1220): Closed hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 
2014-11-19 12:41:47,631 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] catalog.MetaEditor(279): Added 1 2014-11-19 12:41:47,632 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.AssignmentManager(1481): Assigning 1 region(s) to p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,634 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] zookeeper.ZKAssign(175): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Async create of unassigned node f6167bc7f4eee8bd036b8af4bee1bd08 with OFFLINE state 2014-11-19 12:41:47,644 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/region-in-transition 2014-11-19 12:41:47,646 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={f6167bc7f4eee8bd036b8af4bee1bd08 state=OFFLINE, ts=1416429707632, server=null}, server=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,647 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={f6167bc7f4eee8bd036b8af4bee1bd08 state=OFFLINE, ts=1416429707632, server=null}, server=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,650 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.AssignmentManager(1532): p0120.sjc.cloudera.com,56624,1416429702954 unassigned znodes=1 of total=1 2014-11-19 12:41:47,650 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.RegionStates(316): Transitioned {f6167bc7f4eee8bd036b8af4bee1bd08 state=OFFLINE, ts=1416429707634, server=null} to {f6167bc7f4eee8bd036b8af4bee1bd08 state=PENDING_OPEN, ts=1416429707650, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,651 INFO [PriorityRpcServer.handler=0,queue=0,port=56624] regionserver.HRegionServer(3770): Open hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 2014-11-19 12:41:47,661 DEBUG [RS_OPEN_REGION-p0120:56624-0] zookeeper.ZKAssign(832): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioning f6167bc7f4eee8bd036b8af4bee1bd08 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2014-11-19 12:41:47,662 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.AssignmentManager(1659): Bulk assigning done for p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,678 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/region-in-transition/f6167bc7f4eee8bd036b8af4bee1bd08 2014-11-19 12:41:47,678 DEBUG [RS_OPEN_REGION-p0120:56624-0] zookeeper.ZKAssign(907): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioned node f6167bc7f4eee8bd036b8af4bee1bd08 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2014-11-19 12:41:47,680 DEBUG [RS_OPEN_REGION-p0120:56624-0] regionserver.HRegion(4563): Opening region: {ENCODED => f6167bc7f4eee8bd036b8af4bee1bd08, NAME => 'hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08.', STARTKEY => '', ENDKEY => ''} 2014-11-19 12:41:47,681 DEBUG [RS_OPEN_REGION-p0120:56624-0] regionserver.MetricsRegionSourceImpl(67): Creating new MetricsRegionSourceImpl for table namespace f6167bc7f4eee8bd036b8af4bee1bd08 2014-11-19 12:41:47,682 DEBUG [RS_OPEN_REGION-p0120:56624-0] regionserver.HRegion(641): Instantiated hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 
2014-11-19 12:41:47,691 INFO [StoreOpener-f6167bc7f4eee8bd036b8af4bee1bd08-1] compactions.CompactionConfiguration(88): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2014-11-19 12:41:47,693 DEBUG [StoreOpener-f6167bc7f4eee8bd036b8af4bee1bd08-1] regionserver.HRegionFileSystem(193): No StoreFiles for: hdfs://localhost:45640/user/jenkins/hbase/data/hbase/namespace/f6167bc7f4eee8bd036b8af4bee1bd08/info 2014-11-19 12:41:47,695 DEBUG [RS_OPEN_REGION-p0120:56624-0] regionserver.HRegion(3192): Found 0 recovered edits file(s) under hdfs://localhost:45640/user/jenkins/hbase/data/hbase/namespace/f6167bc7f4eee8bd036b8af4bee1bd08 2014-11-19 12:41:47,697 INFO [RS_OPEN_REGION-p0120:56624-0] regionserver.HRegion(742): Onlined f6167bc7f4eee8bd036b8af4bee1bd08; next sequenceid=1 2014-11-19 12:41:47,697 DEBUG [RS_OPEN_REGION-p0120:56624-0] zookeeper.ZKAssign(644): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Attempting to retransition opening state of node f6167bc7f4eee8bd036b8af4bee1bd08 2014-11-19 12:41:47,704 DEBUG [AM.ZK.Worker-pool2-t5] master.AssignmentManager(814): Handling RS_ZK_REGION_OPENING, server=p0120.sjc.cloudera.com,56624,1416429702954, region=f6167bc7f4eee8bd036b8af4bee1bd08, current_state={f6167bc7f4eee8bd036b8af4bee1bd08 state=PENDING_OPEN, ts=1416429707650, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,704 INFO [AM.ZK.Worker-pool2-t5] master.RegionStates(316): Transitioned {f6167bc7f4eee8bd036b8af4bee1bd08 state=PENDING_OPEN, ts=1416429707650, server=p0120.sjc.cloudera.com,56624,1416429702954} to {f6167bc7f4eee8bd036b8af4bee1bd08 state=OPENING, ts=1416429707704, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,704 INFO [PostOpenDeployTasks:f6167bc7f4eee8bd036b8af4bee1bd08] regionserver.HRegionServer(1822): Post open deploy tasks for region=hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 2014-11-19 12:41:47,716 INFO [PostOpenDeployTasks:f6167bc7f4eee8bd036b8af4bee1bd08] catalog.MetaEditor(465): Updated row hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. with server=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,717 INFO [PostOpenDeployTasks:f6167bc7f4eee8bd036b8af4bee1bd08] regionserver.HRegionServer(1847): Finished post open deploy task for hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 2014-11-19 12:41:47,717 DEBUG [RS_OPEN_REGION-p0120:56624-0] zookeeper.ZKAssign(832): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioning f6167bc7f4eee8bd036b8af4bee1bd08 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2014-11-19 12:41:47,727 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] lock.ZKInterProcessLockBase(328): Released /hbase/table-lock/hbase:namespace/write-master:330950000000000 2014-11-19 12:41:47,727 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] handler.CreateTableHandler(192): failed. 
null 2014-11-19 12:41:47,744 DEBUG [RS_OPEN_REGION-p0120:56624-0] zookeeper.ZKAssign(907): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioned node f6167bc7f4eee8bd036b8af4bee1bd08 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2014-11-19 12:41:47,744 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/region-in-transition/f6167bc7f4eee8bd036b8af4bee1bd08 2014-11-19 12:41:47,744 DEBUG [RS_OPEN_REGION-p0120:56624-0] handler.OpenRegionHandler(379): Transitioned f6167bc7f4eee8bd036b8af4bee1bd08 to OPENED in zk on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,744 DEBUG [RS_OPEN_REGION-p0120:56624-0] handler.OpenRegionHandler(179): Opened hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,746 DEBUG [AM.ZK.Worker-pool2-t6] master.AssignmentManager(814): Handling RS_ZK_REGION_OPENED, server=p0120.sjc.cloudera.com,56624,1416429702954, region=f6167bc7f4eee8bd036b8af4bee1bd08, current_state={f6167bc7f4eee8bd036b8af4bee1bd08 state=OPENING, ts=1416429707704, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,746 INFO [AM.ZK.Worker-pool2-t6] master.RegionStates(316): Transitioned {f6167bc7f4eee8bd036b8af4bee1bd08 state=OPENING, ts=1416429707704, server=p0120.sjc.cloudera.com,56624,1416429702954} to {f6167bc7f4eee8bd036b8af4bee1bd08 state=OPEN, ts=1416429707746, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,746 DEBUG [AM.ZK.Worker-pool2-t6] handler.OpenedRegionHandler(149): Handling OPENED of f6167bc7f4eee8bd036b8af4bee1bd08 from p0120.sjc.cloudera.com,56624,1416429702954; deleting unassigned node 2014-11-19 12:41:47,756 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/region-in-transition/f6167bc7f4eee8bd036b8af4bee1bd08 2014-11-19 12:41:47,756 DEBUG [AM.ZK.Worker-pool2-t6] zookeeper.ZKAssign(480): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Deleted unassigned node f6167bc7f4eee8bd036b8af4bee1bd08 in expected state RS_ZK_REGION_OPENED 2014-11-19 12:41:47,756 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/region-in-transition 2014-11-19 12:41:47,758 DEBUG [AM.ZK.Worker-pool2-t8] master.AssignmentManager$4(1199): Znode hbase:namespace,,1416429707365.f6167bc7f4eee8bd036b8af4bee1bd08. 
deleted, state: {f6167bc7f4eee8bd036b8af4bee1bd08 state=OPEN, ts=1416429707746, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:47,758 INFO [AM.ZK.Worker-pool2-t8] master.RegionStates(377): Onlined f6167bc7f4eee8bd036b8af4bee1bd08 on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:47,845 DEBUG [M:0;p0120:33095] zookeeper.ZKUtil(430): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2014-11-19 12:41:47,856 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2014-11-19 12:41:47,893 DEBUG [M:0;p0120:33095] client.ClientSmallScanner(146): Finished with small scan at {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2014-11-19 12:41:47,964 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2014-11-19 12:41:47,982 DEBUG [pool-1-thread-1-EventThread] hbase.ZKNamespaceManager(196): Updating namespace cache from node default with data: \x0A\x07default 2014-11-19 12:41:48,027 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2014-11-19 12:41:48,042 DEBUG [pool-1-thread-1-EventThread] hbase.ZKNamespaceManager(196): Updating namespace cache from node default with data: \x0A\x07default 2014-11-19 12:41:48,042 DEBUG [pool-1-thread-1-EventThread] hbase.ZKNamespaceManager(196): Updating namespace cache from node hbase with data: \x0A\x05hbase 2014-11-19 12:41:48,056 INFO [M:0;p0120:33095] zookeeper.RecoverableZooKeeper(529): Node /hbase/namespace/default already exists and this is not a retry 2014-11-19 12:41:48,064 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2014-11-19 12:41:48,073 INFO [M:0;p0120:33095] zookeeper.RecoverableZooKeeper(529): Node /hbase/namespace/hbase already exists and this is not a retry 2014-11-19 12:41:48,081 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2014-11-19 12:41:48,083 INFO [M:0;p0120:33095] quotas.MasterQuotaManager(78): Quota support disabled 2014-11-19 12:41:48,083 INFO [M:0;p0120:33095] master.HMaster(956): Master has completed initialization 2014-11-19 12:41:48,164 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=hconnection-0x36ec6dbe connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:48,178 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): hconnection-0x36ec6dbe, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:48,179 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): hconnection-0x36ec6dbe-0x149c9ca09350007 connected 2014-11-19 12:41:48,189 
INFO [pool-1-thread-1] client.HConnectionManager$HConnectionImplementation(1837): Closing zookeeper sessionid=0x149c9ca09350007 2014-11-19 12:41:48,315 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=hconnection-0x7f851a17 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:48,329 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:48,330 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): hconnection-0x7f851a17-0x149c9ca09350008 connected 2014-11-19 12:41:48,330 INFO [pool-1-thread-1] hbase.HBaseTestingUtility(910): Minicluster is up 2014-11-19 12:41:48,365 INFO [pool-1-thread-1] hbase.ResourceChecker(147): before: regionserver.TestRegionServerMetrics#testMobMetrics Thread=203, OpenFileDescriptor=313, MaxFileDescriptor=32768, SystemLoadAverage=397, ProcessCount=242, AvailableMemoryMB=4581, ConnectionCount=4 2014-11-19 12:41:48,393 INFO [FifoRpcScheduler.handler1-thread-2] master.HMaster(1767): Client=jenkins//172.17.188.30 create 'testMobMetrics', {NAME => 'd', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', MIN_VERSIONS => '0', TTL => 'FOREVER', MOB_THRESHOLD => '0', IS_MOB => 'true', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} 2014-11-19 12:41:48,425 DEBUG [FifoRpcScheduler.handler1-thread-2] lock.ZKInterProcessLockBase(226): Acquired a lock for /hbase/table-lock/testMobMetrics/write-master:330950000000000 2014-11-19 12:41:48,449 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] handler.CreateTableHandler(161): Create table testMobMetrics 2014-11-19 12:41:48,469 INFO [IPC Server handler 1 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741834_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|RBW]]} size 0 2014-11-19 12:41:48,475 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] util.FSTableDescriptors(651): Wrote descriptor into: hdfs://localhost:45640/user/jenkins/hbase/.tmp/data/default/testMobMetrics/.tabledesc/.tableinfo.0000000001 2014-11-19 12:41:48,479 INFO [RegionOpenAndInitThread-testMobMetrics-1] regionserver.HRegion(4377): creating HRegion testMobMetrics HTD == 'testMobMetrics', {NAME => 'd', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', MIN_VERSIONS => '0', TTL => 'FOREVER', MOB_THRESHOLD => '0', IS_MOB => 'true', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} RootDir = hdfs://localhost:45640/user/jenkins/hbase/.tmp Table name == testMobMetrics 2014-11-19 12:41:48,503 INFO [IPC Server handler 9 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:48,505 DEBUG [RegionOpenAndInitThread-testMobMetrics-1] regionserver.HRegion(641): Instantiated testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
2014-11-19 12:41:48,505 DEBUG [RegionOpenAndInitThread-testMobMetrics-1] regionserver.HRegion(1111): Closing testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43.: disabling compactions & flushes 2014-11-19 12:41:48,506 DEBUG [RegionOpenAndInitThread-testMobMetrics-1] regionserver.HRegion(1138): Updates disabled for region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:48,506 INFO [RegionOpenAndInitThread-testMobMetrics-1] regionserver.HRegion(1220): Closed testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:48,546 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] catalog.MetaEditor(279): Added 1 2014-11-19 12:41:48,548 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.AssignmentManager(1481): Assigning 1 region(s) to p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:48,549 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] zookeeper.ZKAssign(175): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Async create of unassigned node 9a75b9728c76748d2963cbc975504a43 with OFFLINE state 2014-11-19 12:41:48,566 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/region-in-transition 2014-11-19 12:41:48,567 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback(69): rs={9a75b9728c76748d2963cbc975504a43 state=OFFLINE, ts=1416429708548, server=null}, server=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:48,568 DEBUG [pool-1-thread-1-EventThread] master.OfflineCallback$ExistCallback(106): rs={9a75b9728c76748d2963cbc975504a43 state=OFFLINE, ts=1416429708548, server=null}, server=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:48,569 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.AssignmentManager(1532): p0120.sjc.cloudera.com,56624,1416429702954 unassigned znodes=1 of total=1 2014-11-19 12:41:48,570 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.RegionStates(316): Transitioned {9a75b9728c76748d2963cbc975504a43 state=OFFLINE, ts=1416429708549, server=null} to {9a75b9728c76748d2963cbc975504a43 state=PENDING_OPEN, ts=1416429708570, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:48,571 INFO [PriorityRpcServer.handler=9,queue=0,port=56624] regionserver.HRegionServer(3770): Open testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
2014-11-19 12:41:48,580 DEBUG [RS_OPEN_REGION-p0120:56624-1] zookeeper.ZKAssign(832): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioning 9a75b9728c76748d2963cbc975504a43 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2014-11-19 12:41:48,580 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] master.AssignmentManager(1659): Bulk assigning done for p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:48,756 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/region-in-transition/9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:48,756 DEBUG [RS_OPEN_REGION-p0120:56624-1] zookeeper.ZKAssign(907): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioned node 9a75b9728c76748d2963cbc975504a43 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 2014-11-19 12:41:48,757 DEBUG [RS_OPEN_REGION-p0120:56624-1] regionserver.HRegion(4563): Opening region: {ENCODED => 9a75b9728c76748d2963cbc975504a43, NAME => 'testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43.', STARTKEY => '', ENDKEY => ''} 2014-11-19 12:41:48,757 DEBUG [RS_OPEN_REGION-p0120:56624-1] regionserver.MetricsRegionSourceImpl(67): Creating new MetricsRegionSourceImpl for table testMobMetrics 9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:48,758 DEBUG [RS_OPEN_REGION-p0120:56624-1] regionserver.HRegion(641): Instantiated testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:48,774 INFO [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] compactions.CompactionConfiguration(88): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2014-11-19 12:41:48,784 DEBUG [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HRegionFileSystem(193): No StoreFiles for: hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d 2014-11-19 12:41:48,786 DEBUG [AM.ZK.Worker-pool2-t10] master.AssignmentManager(814): Handling RS_ZK_REGION_OPENING, server=p0120.sjc.cloudera.com,56624,1416429702954, region=9a75b9728c76748d2963cbc975504a43, current_state={9a75b9728c76748d2963cbc975504a43 state=PENDING_OPEN, ts=1416429708570, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:48,786 INFO [AM.ZK.Worker-pool2-t10] master.RegionStates(316): Transitioned {9a75b9728c76748d2963cbc975504a43 state=PENDING_OPEN, ts=1416429708570, server=p0120.sjc.cloudera.com,56624,1416429702954} to {9a75b9728c76748d2963cbc975504a43 state=OPENING, ts=1416429708786, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:48,787 DEBUG [RS_OPEN_REGION-p0120:56624-1] regionserver.HRegion(3192): Found 0 recovered edits file(s) under hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:48,793 INFO [RS_OPEN_REGION-p0120:56624-1] regionserver.HRegion(742): Onlined 9a75b9728c76748d2963cbc975504a43; next sequenceid=1 2014-11-19 12:41:48,793 DEBUG [RS_OPEN_REGION-p0120:56624-1] zookeeper.ZKAssign(644): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Attempting to retransition opening state of node 9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:48,795 INFO [PostOpenDeployTasks:9a75b9728c76748d2963cbc975504a43] 
regionserver.HRegionServer(1822): Post open deploy tasks for region=testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:48,837 DEBUG [MASTER_TABLE_OPERATIONS-p0120:33095-0] lock.ZKInterProcessLockBase(328): Released /hbase/table-lock/testMobMetrics/write-master:330950000000000 2014-11-19 12:41:48,837 INFO [MASTER_TABLE_OPERATIONS-p0120:33095-0] handler.CreateTableHandler(192): failed. null 2014-11-19 12:41:48,843 INFO [PostOpenDeployTasks:9a75b9728c76748d2963cbc975504a43] catalog.MetaEditor(465): Updated row testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. with server=p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:48,843 INFO [PostOpenDeployTasks:9a75b9728c76748d2963cbc975504a43] regionserver.HRegionServer(1847): Finished post open deploy task for testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:48,843 DEBUG [RS_OPEN_REGION-p0120:56624-1] zookeeper.ZKAssign(832): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioning 9a75b9728c76748d2963cbc975504a43 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2014-11-19 12:41:48,892 DEBUG [RS_OPEN_REGION-p0120:56624-1] zookeeper.ZKAssign(907): regionserver:56624-0x149c9ca09350001, quorum=localhost:64128, baseZNode=/hbase Transitioned node 9a75b9728c76748d2963cbc975504a43 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED 2014-11-19 12:41:48,892 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/region-in-transition/9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:48,893 DEBUG [RS_OPEN_REGION-p0120:56624-1] handler.OpenRegionHandler(379): Transitioned 9a75b9728c76748d2963cbc975504a43 to OPENED in zk on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:48,893 DEBUG [RS_OPEN_REGION-p0120:56624-1] handler.OpenRegionHandler(179): Opened testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:48,895 DEBUG [AM.ZK.Worker-pool2-t11] master.AssignmentManager(814): Handling RS_ZK_REGION_OPENED, server=p0120.sjc.cloudera.com,56624,1416429702954, region=9a75b9728c76748d2963cbc975504a43, current_state={9a75b9728c76748d2963cbc975504a43 state=OPENING, ts=1416429708786, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:48,895 INFO [AM.ZK.Worker-pool2-t11] master.RegionStates(316): Transitioned {9a75b9728c76748d2963cbc975504a43 state=OPENING, ts=1416429708786, server=p0120.sjc.cloudera.com,56624,1416429702954} to {9a75b9728c76748d2963cbc975504a43 state=OPEN, ts=1416429708895, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:48,895 DEBUG [AM.ZK.Worker-pool2-t11] handler.OpenedRegionHandler(149): Handling OPENED of 9a75b9728c76748d2963cbc975504a43 from p0120.sjc.cloudera.com,56624,1416429702954; deleting unassigned node 2014-11-19 12:41:48,912 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/region-in-transition/9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:48,913 DEBUG [AM.ZK.Worker-pool2-t11] zookeeper.ZKAssign(480): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Deleted unassigned node 9a75b9728c76748d2963cbc975504a43 in expected state RS_ZK_REGION_OPENED 2014-11-19 12:41:48,913 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): master:33095-0x149c9ca09350000, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/region-in-transition 2014-11-19 12:41:48,913 DEBUG [AM.ZK.Worker-pool2-t12] master.AssignmentManager$4(1199): Znode testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
deleted, state: {9a75b9728c76748d2963cbc975504a43 state=OPEN, ts=1416429708895, server=p0120.sjc.cloudera.com,56624,1416429702954} 2014-11-19 12:41:48,913 INFO [AM.ZK.Worker-pool2-t12] master.RegionStates(377): Onlined 9a75b9728c76748d2963cbc975504a43 on p0120.sjc.cloudera.com,56624,1416429702954 2014-11-19 12:41:49,100 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=catalogtracker-on-hconnection-0x7f851a17 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:49,101 DEBUG [pool-1-thread-1] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@7878966d 2014-11-19 12:41:49,142 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:49,142 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server 2014-11-19 12:41:49,144 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): catalogtracker-on-hconnection-0x7f851a17-0x149c9ca09350009 connected 2014-11-19 12:41:49,156 DEBUG [pool-1-thread-1] catalog.CatalogTracker(221): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@7878966d 2014-11-19 12:41:49,196 INFO [pool-1-thread-1] hbase.Waiter(174): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2014-11-19 12:41:49,215 DEBUG [pool-1-thread-1] client.ClientSmallScanner(146): Finished with small scan at {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2014-11-19 12:41:49,223 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=catalogtracker-on-hconnection-0x7f851a17 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:49,224 DEBUG [pool-1-thread-1] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@32627f9e 2014-11-19 12:41:49,270 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:49,271 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server 2014-11-19 12:41:49,272 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): catalogtracker-on-hconnection-0x7f851a17-0x149c9ca0935000a connected 2014-11-19 12:41:49,298 INFO [PriorityRpcServer.handler=5,queue=0,port=56624] regionserver.HRegionServer(3937): Flushing testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
2014-11-19 12:41:49,300 INFO [PriorityRpcServer.handler=5,queue=0,port=56624] regionserver.HRegion(1691): Started memstore flush for testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43., current region memstore size 168 2014-11-19 12:41:49,407 INFO [IPC Server handler 5 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:49,468 INFO [PriorityRpcServer.handler=5,queue=0,port=56624] regionserver.HMobStore(223): Renaming flushed file from hdfs://localhost:45640/user/jenkins/hbase/mobdir/.tmp/d41d8cd98f00b204e9800998ecf8427e20141119e79cfc0063ae4ef1a53c96385e433aca to hdfs://localhost:45640/user/jenkins/hbase/mobdir/data/default/testMobMetrics/7799e477d8a400bf8295a1af7a73a13b/d/d41d8cd98f00b204e9800998ecf8427e20141119e79cfc0063ae4ef1a53c96385e433aca 2014-11-19 12:41:49,494 INFO [IPC Server handler 0 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:49,574 INFO [PriorityRpcServer.handler=5,queue=0,port=56624] mob.DefaultMobStoreFlusher(129): Flushed, sequenceid=3, memsize=168, hasBloomFilter=true, into tmp file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/14ea0c361e2348ab816879cd52147bc2 2014-11-19 12:41:49,595 DEBUG [PriorityRpcServer.handler=5,queue=0,port=56624] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/14ea0c361e2348ab816879cd52147bc2 as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 2014-11-19 12:41:49,614 INFO [PriorityRpcServer.handler=5,queue=0,port=56624] regionserver.HStore(882): Added hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2, entries=1, sequenceid=3, filesize=4.9 K 2014-11-19 12:41:49,615 INFO [PriorityRpcServer.handler=5,queue=0,port=56624] regionserver.HRegion(1837): Finished memstore flush of ~168/168, currentsize=0/0 for region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
in 315ms, sequenceid=3, compaction requested=false 2014-11-19 12:41:49,617 DEBUG [pool-1-thread-1] catalog.CatalogTracker(221): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@32627f9e 2014-11-19 12:41:49,648 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=catalogtracker-on-hconnection-0x7f851a17 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:49,649 DEBUG [pool-1-thread-1] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@434b86f1 2014-11-19 12:41:49,682 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:49,683 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server 2014-11-19 12:41:49,684 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): catalogtracker-on-hconnection-0x7f851a17-0x149c9ca0935000b connected 2014-11-19 12:41:49,705 INFO [PriorityRpcServer.handler=6,queue=0,port=56624] regionserver.HRegionServer(3937): Flushing testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:49,705 INFO [PriorityRpcServer.handler=6,queue=0,port=56624] regionserver.HRegion(1691): Started memstore flush for testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43., current region memstore size 168 2014-11-19 12:41:49,730 INFO [IPC Server handler 1 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741838_1014{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:49,750 INFO [PriorityRpcServer.handler=6,queue=0,port=56624] regionserver.HMobStore(223): Renaming flushed file from hdfs://localhost:45640/user/jenkins/hbase/mobdir/.tmp/d41d8cd98f00b204e9800998ecf8427e201411192b352ab39dd1485cae989314c96b8d0b to hdfs://localhost:45640/user/jenkins/hbase/mobdir/data/default/testMobMetrics/7799e477d8a400bf8295a1af7a73a13b/d/d41d8cd98f00b204e9800998ecf8427e201411192b352ab39dd1485cae989314c96b8d0b 2014-11-19 12:41:49,769 INFO [IPC Server handler 9 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741839_1015{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:49,771 INFO [PriorityRpcServer.handler=6,queue=0,port=56624] mob.DefaultMobStoreFlusher(129): Flushed, sequenceid=5, memsize=168, hasBloomFilter=true, into tmp file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/024978dcf45641f2a39c9e79e4b183cf 2014-11-19 12:41:49,792 DEBUG [PriorityRpcServer.handler=6,queue=0,port=56624] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/024978dcf45641f2a39c9e79e4b183cf as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf 2014-11-19 12:41:49,812 INFO [PriorityRpcServer.handler=6,queue=0,port=56624] 
regionserver.HStore(882): Added hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf, entries=1, sequenceid=5, filesize=4.9 K 2014-11-19 12:41:49,812 INFO [PriorityRpcServer.handler=6,queue=0,port=56624] regionserver.HRegion(1837): Finished memstore flush of ~168/168, currentsize=0/0 for region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. in 107ms, sequenceid=5, compaction requested=false 2014-11-19 12:41:49,813 DEBUG [pool-1-thread-1] catalog.CatalogTracker(221): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@434b86f1 2014-11-19 12:41:49,829 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=catalogtracker-on-hconnection-0x7f851a17 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:49,830 DEBUG [pool-1-thread-1] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@2d59a8aa 2014-11-19 12:41:49,840 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:49,841 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server 2014-11-19 12:41:49,841 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): catalogtracker-on-hconnection-0x7f851a17-0x149c9ca0935000c connected 2014-11-19 12:41:49,865 INFO [PriorityRpcServer.handler=7,queue=0,port=56624] regionserver.HRegionServer(3937): Flushing testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
2014-11-19 12:41:49,865 INFO [PriorityRpcServer.handler=7,queue=0,port=56624] regionserver.HRegion(1691): Started memstore flush for testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43., current region memstore size 168 2014-11-19 12:41:49,889 INFO [IPC Server handler 7 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741840_1016{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:50,014 INFO [PriorityRpcServer.handler=7,queue=0,port=56624] regionserver.HMobStore(223): Renaming flushed file from hdfs://localhost:45640/user/jenkins/hbase/mobdir/.tmp/d41d8cd98f00b204e9800998ecf8427e20141119b84a3dd8393c41c4afcfa35da4b8fc16 to hdfs://localhost:45640/user/jenkins/hbase/mobdir/data/default/testMobMetrics/7799e477d8a400bf8295a1af7a73a13b/d/d41d8cd98f00b204e9800998ecf8427e20141119b84a3dd8393c41c4afcfa35da4b8fc16 2014-11-19 12:41:50,080 INFO [IPC Server handler 4 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741841_1017{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:50,081 INFO [PriorityRpcServer.handler=7,queue=0,port=56624] mob.DefaultMobStoreFlusher(129): Flushed, sequenceid=7, memsize=168, hasBloomFilter=true, into tmp file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/cbc9c78ebb0f45c196b43a8ba0743d98 2014-11-19 12:41:50,097 DEBUG [PriorityRpcServer.handler=7,queue=0,port=56624] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/cbc9c78ebb0f45c196b43a8ba0743d98 as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 2014-11-19 12:41:50,149 INFO [PriorityRpcServer.handler=7,queue=0,port=56624] regionserver.HStore(882): Added hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98, entries=1, sequenceid=7, filesize=4.9 K 2014-11-19 12:41:50,149 INFO [PriorityRpcServer.handler=7,queue=0,port=56624] regionserver.HRegion(1837): Finished memstore flush of ~168/168, currentsize=0/0 for region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
in 284ms, sequenceid=7, compaction requested=true 2014-11-19 12:41:50,152 DEBUG [PriorityRpcServer.handler=7,queue=0,port=56624] regionserver.CompactSplitThread(322): Small Compaction requested: system; Because: Compaction through user triggered flush; compaction_queue=(0:0), split_queue=0, merge_queue=0 2014-11-19 12:41:50,152 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.RatioBasedCompactionPolicy(92): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 10 blocking 2014-11-19 12:41:50,152 DEBUG [pool-1-thread-1] catalog.CatalogTracker(221): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@2d59a8aa 2014-11-19 12:41:50,153 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.ExploringCompactionPolicy(122): Exploring compaction algorithm has selected 3 files of size 15069 starting at candidate #0 after considering 1 permutations with 1 in ratio 2014-11-19 12:41:50,154 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HStore(1464): 9a75b9728c76748d2963cbc975504a43 - d: Initiating major compaction 2014-11-19 12:41:50,155 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HRegion(1477): Starting compaction on d in region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:50,157 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] zookeeper.RecoverableZooKeeper(119): Process identifier=abb7bdaf3d394988b4e6ea10ffbafb52 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:50,199 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151-EventThread] zookeeper.ZooKeeperWatcher(310): abb7bdaf3d394988b4e6ea10ffbafb52, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:50,200 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151-EventThread] zookeeper.ZooKeeperWatcher(387): abb7bdaf3d394988b4e6ea10ffbafb52-0x149c9ca0935000d connected 2014-11-19 12:41:50,248 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] zookeeper.ZKUtil(428): abb7bdaf3d394988b4e6ea10ffbafb52-0x149c9ca0935000d, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:50,248 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] mob.MobZookeeper(108): Locked the column family testMobMetrics:d 2014-11-19 12:41:50,248 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HMobStore(365): Obtain the lock for the store[d], ready to perform the major compaction 2014-11-19 12:41:50,266 INFO [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] compactions.CompactionConfiguration(88): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2014-11-19 12:41:50,284 DEBUG [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HStore(551): loaded hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf, isReference=false, isBulkLoadResult=false, seqid=5, majorCompaction=false 2014-11-19 12:41:50,294 DEBUG [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HStore(551): loaded hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2, isReference=false, isBulkLoadResult=false, seqid=3, majorCompaction=false 2014-11-19 12:41:50,305 DEBUG 
[StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HStore(551): loaded hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98, isReference=false, isBulkLoadResult=false, seqid=7, majorCompaction=false 2014-11-19 12:41:50,305 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] zookeeper.ZKUtil(428): abb7bdaf3d394988b4e6ea10ffbafb52-0x149c9ca0935000d, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-majorCompaction/abb7bdaf3d394988b4e6ea10ffbafb52 2014-11-19 12:41:50,306 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] mob.MobZookeeper(125): Unlocking the column family testMobMetrics:d 2014-11-19 12:41:50,309 DEBUG [pool-1-thread-1] regionserver.HRegion(3192): Found 0 recovered edits file(s) under hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:50,312 INFO [pool-1-thread-1] regionserver.HRegion(742): Onlined 9a75b9728c76748d2963cbc975504a43; next sequenceid=8 2014-11-19 12:41:50,313 DEBUG [pool-1-thread-1] compactions.RatioBasedCompactionPolicy(92): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 10 blocking 2014-11-19 12:41:50,313 DEBUG [pool-1-thread-1] regionserver.HStore(1464): 9a75b9728c76748d2963cbc975504a43 - d: Initiating major compaction 2014-11-19 12:41:50,314 INFO [pool-1-thread-1] regionserver.HRegion(1477): Starting compaction on d in region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:50,315 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=32f1ec2197424d0b945b878ce00c90de connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:50,348 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151-EventThread] zookeeper.ZooKeeperWatcher(310): abb7bdaf3d394988b4e6ea10ffbafb52-0x149c9ca0935000d, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:50,348 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HStore(1104): Starting compaction of 3 file(s) in d of testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
into tmpdir=hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp, totalSize=14.7 K 2014-11-19 12:41:50,349 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2, keycount=1, bloomtype=ROW, size=4.9 K, encoding=NONE, seqNum=3, earliestPutTs=1416429709218 2014-11-19 12:41:50,349 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf, keycount=1, bloomtype=ROW, size=4.9 K, encoding=NONE, seqNum=5, earliestPutTs=1416429709643 2014-11-19 12:41:50,350 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98, keycount=1, bloomtype=ROW, size=4.9 K, encoding=NONE, seqNum=7, earliestPutTs=1416429709825 2014-11-19 12:41:50,403 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): 32f1ec2197424d0b945b878ce00c90de, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:50,405 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): 32f1ec2197424d0b945b878ce00c90de-0x149c9ca0935000e connected 2014-11-19 12:41:50,425 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): 32f1ec2197424d0b945b878ce00c90de-0x149c9ca0935000e, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:50,425 INFO [IPC Server handler 0 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741842_1018{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 4489 2014-11-19 12:41:50,425 DEBUG [pool-1-thread-1] mob.MobZookeeper(108): Locked the column family testMobMetrics:d 2014-11-19 12:41:50,426 INFO [pool-1-thread-1] regionserver.HMobStore(365): Obtain the lock for the store[d], ready to perform the major compaction 2014-11-19 12:41:50,482 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(529): Node /hbase/MOB/testMobMetrics:d-majorCompaction already exists and this is not a retry 2014-11-19 12:41:50,509 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): 32f1ec2197424d0b945b878ce00c90de-0x149c9ca0935000e, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-majorCompaction/32f1ec2197424d0b945b878ce00c90de 2014-11-19 12:41:50,509 DEBUG [pool-1-thread-1] mob.MobZookeeper(125): Unlocking the column family testMobMetrics:d 2014-11-19 12:41:50,540 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): 32f1ec2197424d0b945b878ce00c90de-0x149c9ca0935000e, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:50,541 INFO [pool-1-thread-1] regionserver.HStore(1104): Starting compaction of 3 file(s) in d of testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
into tmpdir=hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp, totalSize=14.7 K 2014-11-19 12:41:50,541 DEBUG [pool-1-thread-1] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2, keycount=1, bloomtype=ROW, size=4.9 K, encoding=NONE, seqNum=3, earliestPutTs=1416429709218 2014-11-19 12:41:50,542 DEBUG [pool-1-thread-1] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf, keycount=1, bloomtype=ROW, size=4.9 K, encoding=NONE, seqNum=5, earliestPutTs=1416429709643 2014-11-19 12:41:50,543 DEBUG [pool-1-thread-1] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98, keycount=1, bloomtype=ROW, size=4.9 K, encoding=NONE, seqNum=7, earliestPutTs=1416429709825 2014-11-19 12:41:50,631 INFO [IPC Server handler 0 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741843_1019{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:50,633 INFO [IPC Server handler 6 on 45640] blockmanagement.BlockManager(1076): BLOCK* addToInvalidates: blk_1073741843_1019 127.0.0.1:52665 2014-11-19 12:41:50,645 INFO [IPC Server handler 7 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741844_1020{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|RBW]]} size 0 2014-11-19 12:41:50,660 DEBUG [pool-1-thread-1] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/3781362dc6c94dffba9908c16871a682 as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/3781362dc6c94dffba9908c16871a682 2014-11-19 12:41:50,678 DEBUG [pool-1-thread-1] regionserver.HStore(1535): Removing store files after compaction... 
2014-11-19 12:41:50,695 DEBUG [pool-1-thread-1] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2, to hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 2014-11-19 12:41:50,701 DEBUG [pool-1-thread-1] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf, to hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf 2014-11-19 12:41:50,706 DEBUG [pool-1-thread-1] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98, to hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 2014-11-19 12:41:50,707 INFO [pool-1-thread-1] regionserver.HStore(1234): Completed major compaction of 3 file(s) in d of testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. into 3781362dc6c94dffba9908c16871a682(size=4.8 K), total size for store is 4.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2014-11-19 12:41:50,723 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): 32f1ec2197424d0b945b878ce00c90de-0x149c9ca0935000e, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/MOB/testMobMetrics:d-majorCompaction/32f1ec2197424d0b945b878ce00c90de 2014-11-19 12:41:50,820 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=catalogtracker-on-hconnection-0x7f851a17 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:50,822 DEBUG [pool-1-thread-1] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@63303a3a 2014-11-19 12:41:50,829 INFO [IPC Server handler 5 on 45640] blockmanagement.BlockManager(1076): BLOCK* addToInvalidates: blk_1073741842_1018 127.0.0.1:52665 2014-11-19 12:41:50,838 INFO [IPC Server handler 3 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741845_1021{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:50,840 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:50,841 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server 2014-11-19 12:41:50,842 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): catalogtracker-on-hconnection-0x7f851a17-0x149c9ca0935000f connected 2014-11-19 12:41:50,858 DEBUG 
[RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/f14951c08aca4f4bb256514c8dfcb98c as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f14951c08aca4f4bb256514c8dfcb98c 2014-11-19 12:41:50,863 INFO [PriorityRpcServer.handler=8,queue=0,port=56624] regionserver.HRegionServer(3937): Flushing testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:50,864 INFO [PriorityRpcServer.handler=8,queue=0,port=56624] regionserver.HRegion(1691): Started memstore flush for testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43., current region memstore size 168 2014-11-19 12:41:50,875 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HStore(1535): Removing store files after compaction... 2014-11-19 12:41:50,881 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(381): File:hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 already exists in archive, moving to timestamped backup and overwriting current. 2014-11-19 12:41:50,886 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(397): Backed up archive file from hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 2014-11-19 12:41:50,887 INFO [IPC Server handler 4 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741846_1022{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:50,889 WARN [IPC Server handler 8 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. 2014-11-19 12:41:50,890 INFO [IPC Server handler 9 on 45640] blockmanagement.BlockManager(1076): BLOCK* addToInvalidates: blk_1073741846_1022 127.0.0.1:52665 2014-11-19 12:41:50,896 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 on try #0 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 
20 more 2014-11-19 12:41:50,900 INFO [IPC Server handler 2 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741847_1023{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:50,902 INFO [PriorityRpcServer.handler=8,queue=0,port=56624] mob.DefaultMobStoreFlusher(129): Flushed, sequenceid=10, memsize=168, hasBloomFilter=true, into tmp file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/f59077c0a0e841be97e370d475ee90a1 2014-11-19 12:41:50,904 WARN [IPC Server handler 6 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. 2014-11-19 12:41:50,905 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 on try #1 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at 
org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 20 more 2014-11-19 12:41:50,909 WARN [IPC Server handler 9 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. 2014-11-19 12:41:50,910 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 on try #2 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 
20 more 2014-11-19 12:41:50,911 ERROR [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(433): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 2014-11-19 12:41:50,911 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(336): Couldn't archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2 into backup directory: hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d 2014-11-19 12:41:50,913 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(381): File:hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf already exists in archive, moving to timestamped backup and overwriting current. 2014-11-19 12:41:50,915 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(397): Backed up archive file from hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf 2014-11-19 12:41:50,916 WARN [IPC Server handler 3 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. 2014-11-19 12:41:50,917 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf on try #0 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 
20 more 2014-11-19 12:41:50,918 DEBUG [PriorityRpcServer.handler=8,queue=0,port=56624] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/f59077c0a0e841be97e370d475ee90a1 as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f59077c0a0e841be97e370d475ee90a1 2014-11-19 12:41:50,922 WARN [IPC Server handler 8 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. 2014-11-19 12:41:50,922 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf on try #1 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at 
org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 20 more 2014-11-19 12:41:50,927 WARN [IPC Server handler 0 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. 2014-11-19 12:41:50,928 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf on try #2 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 20 more 2014-11-19 12:41:50,930 ERROR [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(433): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf 2014-11-19 12:41:50,930 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(336): Couldn't archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf into backup directory: hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d 2014-11-19 12:41:50,932 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(381): File:hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 already exists in archive, moving to timestamped backup and overwriting current. 2014-11-19 12:41:50,933 INFO [PriorityRpcServer.handler=8,queue=0,port=56624] regionserver.HStore(882): Added hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f59077c0a0e841be97e370d475ee90a1, entries=1, sequenceid=10, filesize=4.8 K 2014-11-19 12:41:50,934 INFO [PriorityRpcServer.handler=8,queue=0,port=56624] regionserver.HRegion(1837): Finished memstore flush of ~168/168, currentsize=0/0 for region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. in 70ms, sequenceid=10, compaction requested=false 2014-11-19 12:41:50,934 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(397): Backed up archive file from hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 2014-11-19 12:41:50,934 DEBUG [pool-1-thread-1] catalog.CatalogTracker(221): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@63303a3a 2014-11-19 12:41:50,936 WARN [IPC Server handler 7 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. 
2014-11-19 12:41:50,937 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 on try #0 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at 
org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 
20 more 2014-11-19 12:41:50,940 WARN [IPC Server handler 9 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. 2014-11-19 12:41:50,941 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 on try #1 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at 
org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at 
org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 20 more 2014-11-19 12:41:50,944 WARN [IPC Server handler 2 on 45640] security.UserGroupInformation(1645): PriviledgedActionException as:jenkins.hfs.0 (auth:SIMPLE) cause:java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. 2014-11-19 12:41:50,945 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(427): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 on try #2 java.io.FileNotFoundException: File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2765) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300) at org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:473) at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678) at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425) at 
org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335) at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:231) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File/Directory /user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 does not exist. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2143) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2109) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:989) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) at org.apache.hadoop.ipc.Client.call(Client.java:1411) at org.apache.hadoop.ipc.Client.call(Client.java:1364) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy20.setTimes(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setTimes(ClientNamenodeProtocolTranslatorPB.java:822) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy21.setTimes(Unknown Source) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294) at com.sun.proxy.$Proxy26.setTimes(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2763) ... 20 more 2014-11-19 12:41:50,946 ERROR [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(433): Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 2014-11-19 12:41:50,946 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(336): Couldn't archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98 into backup directory: hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d 2014-11-19 12:41:50,946 WARN [RS:0;p0120:56624-smallCompactions-1416429710151] backup.HFileArchiver(290): Failed to complete archive of: [class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf, class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98]. Those files are still in the original location, and they may slow down reads. 2014-11-19 12:41:50,947 ERROR [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HStore(1542): Failed removing compacted files in d. Files we were trying to remove are [hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/14ea0c361e2348ab816879cd52147bc2, hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/024978dcf45641f2a39c9e79e4b183cf, hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/cbc9c78ebb0f45c196b43a8ba0743d98]; some of them may have been already removed java.io.IOException: Failed to archive/delete all the files for region:testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43., family:d into hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d. Something is probably awry on the filesystem. 
at org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:232) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:419) at org.apache.hadoop.hbase.regionserver.HStore.completeCompaction(HStore.java:1539) at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1139) at org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2014-11-19 12:41:50,948 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HStore(1234): Completed major compaction of 3 file(s) in d of testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. into f14951c08aca4f4bb256514c8dfcb98c(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2014-11-19 12:41:50,961 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=catalogtracker-on-hconnection-0x7f851a17 connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:50,962 DEBUG [pool-1-thread-1] catalog.CatalogTracker(197): Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@e006307 2014-11-19 12:41:50,972 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151-EventThread] zookeeper.ZooKeeperWatcher(310): abb7bdaf3d394988b4e6ea10ffbafb52-0x149c9ca0935000d, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/MOB/testMobMetrics:d-majorCompaction/abb7bdaf3d394988b4e6ea10ffbafb52 2014-11-19 12:41:50,990 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): catalogtracker-on-hconnection-0x7f851a17, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:50,992 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): catalogtracker-on-hconnection-0x7f851a17-0x149c9ca09350010 connected 2014-11-19 12:41:51,011 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): catalogtracker-on-hconnection-0x7f851a17-0x149c9ca09350010, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/meta-region-server 2014-11-19 12:41:51,012 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.CompactSplitThread$CompactionRunner(480): Completed compaction: Request = regionName=testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43., storeName=d, fileCount=3, fileSize=14.7 K, priority=7, time=8912377433010774; duration=0sec 2014-11-19 12:41:51,013 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.CompactSplitThread$CompactionRunner(502): CompactSplitThread Status: compaction_queue=(0:0), split_queue=0, merge_queue=0 2014-11-19 12:41:51,033 INFO [PriorityRpcServer.handler=9,queue=0,port=56624] regionserver.HRegionServer(3937): Flushing testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
2014-11-19 12:41:51,034 INFO [PriorityRpcServer.handler=9,queue=0,port=56624] regionserver.HRegion(1691): Started memstore flush for testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43., current region memstore size 168 2014-11-19 12:41:51,055 INFO [IPC Server handler 7 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741848_1024{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:51,058 INFO [IPC Server handler 9 on 45640] blockmanagement.BlockManager(1076): BLOCK* addToInvalidates: blk_1073741848_1024 127.0.0.1:52665 2014-11-19 12:41:51,070 INFO [IPC Server handler 2 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741849_1025{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:51,072 INFO [PriorityRpcServer.handler=9,queue=0,port=56624] mob.DefaultMobStoreFlusher(129): Flushed, sequenceid=13, memsize=168, hasBloomFilter=true, into tmp file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/707b266c094d4facbf6d1c8804f777fc 2014-11-19 12:41:51,088 DEBUG [PriorityRpcServer.handler=9,queue=0,port=56624] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/707b266c094d4facbf6d1c8804f777fc as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/707b266c094d4facbf6d1c8804f777fc 2014-11-19 12:41:51,106 INFO [PriorityRpcServer.handler=9,queue=0,port=56624] regionserver.HStore(882): Added hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/707b266c094d4facbf6d1c8804f777fc, entries=1, sequenceid=13, filesize=4.8 K 2014-11-19 12:41:51,107 INFO [PriorityRpcServer.handler=9,queue=0,port=56624] regionserver.HRegion(1837): Finished memstore flush of ~168/168, currentsize=0/0 for region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
in 73ms, sequenceid=13, compaction requested=true 2014-11-19 12:41:51,107 DEBUG [PriorityRpcServer.handler=9,queue=0,port=56624] regionserver.CompactSplitThread(322): Small Compaction requested: system; Because: Compaction through user triggered flush; compaction_queue=(0:1), split_queue=0, merge_queue=0 2014-11-19 12:41:51,108 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.RatioBasedCompactionPolicy(92): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 10 blocking 2014-11-19 12:41:51,108 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.ExploringCompactionPolicy(122): Exploring compaction algorithm has selected 3 files of size 14823 starting at candidate #0 after considering 1 permutations with 1 in ratio 2014-11-19 12:41:51,108 DEBUG [pool-1-thread-1] catalog.CatalogTracker(221): Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@e006307 2014-11-19 12:41:51,108 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HStore(1464): 9a75b9728c76748d2963cbc975504a43 - d: Initiating major compaction 2014-11-19 12:41:51,108 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HRegion(1477): Starting compaction on d in region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:51,109 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] zookeeper.RecoverableZooKeeper(119): Process identifier=c079fd3f32d549d6a8864bb4f8d152ee connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:51,123 INFO [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] compactions.CompactionConfiguration(88): size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired; major period 604800000, major jitter 0.500000 2014-11-19 12:41:51,124 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151-EventThread] zookeeper.ZooKeeperWatcher(310): c079fd3f32d549d6a8864bb4f8d152ee, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:51,126 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151-EventThread] zookeeper.ZooKeeperWatcher(387): c079fd3f32d549d6a8864bb4f8d152ee-0x149c9ca09350011 connected 2014-11-19 12:41:51,132 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] zookeeper.ZKUtil(428): c079fd3f32d549d6a8864bb4f8d152ee-0x149c9ca09350011, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:51,132 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] mob.MobZookeeper(108): Locked the column family testMobMetrics:d 2014-11-19 12:41:51,132 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HMobStore(365): Obtain the lock for the store[d], ready to perform the major compaction 2014-11-19 12:41:51,139 DEBUG [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HStore(551): loaded hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/3781362dc6c94dffba9908c16871a682, isReference=false, isBulkLoadResult=false, seqid=7, majorCompaction=true 2014-11-19 12:41:51,140 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] zookeeper.RecoverableZooKeeper(529): Node /hbase/MOB/testMobMetrics:d-majorCompaction already exists and this is not a retry 2014-11-19 12:41:51,148 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] zookeeper.ZKUtil(428): c079fd3f32d549d6a8864bb4f8d152ee-0x149c9ca09350011, 
quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-majorCompaction/c079fd3f32d549d6a8864bb4f8d152ee 2014-11-19 12:41:51,149 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] mob.MobZookeeper(125): Unlocking the column family testMobMetrics:d 2014-11-19 12:41:51,150 DEBUG [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HStore(551): loaded hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/707b266c094d4facbf6d1c8804f777fc, isReference=false, isBulkLoadResult=false, seqid=13, majorCompaction=false 2014-11-19 12:41:51,156 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151-EventThread] zookeeper.ZooKeeperWatcher(310): c079fd3f32d549d6a8864bb4f8d152ee-0x149c9ca09350011, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:51,157 INFO [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HStore(1104): Starting compaction of 3 file(s) in d of testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. into tmpdir=hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp, totalSize=14.5 K 2014-11-19 12:41:51,157 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/3781362dc6c94dffba9908c16871a682, keycount=3, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=7, earliestPutTs=1416429709218 2014-11-19 12:41:51,158 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f59077c0a0e841be97e370d475ee90a1, keycount=1, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=10, earliestPutTs=1416429710761 2014-11-19 12:41:51,158 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/707b266c094d4facbf6d1c8804f777fc, keycount=1, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=13, earliestPutTs=1416429710958 2014-11-19 12:41:51,161 DEBUG [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HStore(551): loaded hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f14951c08aca4f4bb256514c8dfcb98c, isReference=false, isBulkLoadResult=false, seqid=7, majorCompaction=true 2014-11-19 12:41:51,174 DEBUG [StoreOpener-9a75b9728c76748d2963cbc975504a43-1] regionserver.HStore(551): loaded hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f59077c0a0e841be97e370d475ee90a1, isReference=false, isBulkLoadResult=false, seqid=10, majorCompaction=false 2014-11-19 12:41:51,177 DEBUG [pool-1-thread-1] regionserver.HRegion(3192): Found 0 recovered edits file(s) under hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43 2014-11-19 12:41:51,182 INFO [pool-1-thread-1] regionserver.HRegion(742): Onlined 9a75b9728c76748d2963cbc975504a43; next sequenceid=14 2014-11-19 12:41:51,182 DEBUG [pool-1-thread-1] compactions.RatioBasedCompactionPolicy(92): Selecting compaction from 4 store files, 0 compacting, 4 eligible, 10 blocking 2014-11-19 12:41:51,183 DEBUG 
[pool-1-thread-1] regionserver.HStore(1464): 9a75b9728c76748d2963cbc975504a43 - d: Initiating major compaction 2014-11-19 12:41:51,183 INFO [pool-1-thread-1] regionserver.HRegion(1477): Starting compaction on d in region testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 2014-11-19 12:41:51,184 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(119): Process identifier=584915ecb8b84e3db4abc9f8f9def7fd connecting to ZooKeeper ensemble=localhost:64128 2014-11-19 12:41:51,191 INFO [IPC Server handler 1 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741850_1026{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 4489 2014-11-19 12:41:51,208 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): 584915ecb8b84e3db4abc9f8f9def7fd, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2014-11-19 12:41:51,209 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(387): 584915ecb8b84e3db4abc9f8f9def7fd-0x149c9ca09350012 connected 2014-11-19 12:41:51,223 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): 584915ecb8b84e3db4abc9f8f9def7fd-0x149c9ca09350012, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:51,224 DEBUG [pool-1-thread-1] mob.MobZookeeper(108): Locked the column family testMobMetrics:d 2014-11-19 12:41:51,224 INFO [pool-1-thread-1] regionserver.HMobStore(365): Obtain the lock for the store[d], ready to perform the major compaction 2014-11-19 12:41:51,231 INFO [pool-1-thread-1] zookeeper.RecoverableZooKeeper(529): Node /hbase/MOB/testMobMetrics:d-majorCompaction already exists and this is not a retry 2014-11-19 12:41:51,240 DEBUG [pool-1-thread-1] zookeeper.ZKUtil(428): 584915ecb8b84e3db4abc9f8f9def7fd-0x149c9ca09350012, quorum=localhost:64128, baseZNode=/hbase Set watcher on existing znode=/hbase/MOB/testMobMetrics:d-majorCompaction/584915ecb8b84e3db4abc9f8f9def7fd 2014-11-19 12:41:51,242 DEBUG [pool-1-thread-1] mob.MobZookeeper(125): Unlocking the column family testMobMetrics:d 2014-11-19 12:41:51,248 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): 584915ecb8b84e3db4abc9f8f9def7fd-0x149c9ca09350012, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/MOB/testMobMetrics:d-lock 2014-11-19 12:41:51,249 INFO [pool-1-thread-1] regionserver.HStore(1104): Starting compaction of 4 file(s) in d of testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. 
into tmpdir=hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp, totalSize=19.6 K 2014-11-19 12:41:51,249 DEBUG [pool-1-thread-1] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f14951c08aca4f4bb256514c8dfcb98c, keycount=3, bloomtype=ROW, size=5.1 K, encoding=NONE, seqNum=7, earliestPutTs=1416429709218 2014-11-19 12:41:51,250 DEBUG [pool-1-thread-1] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/3781362dc6c94dffba9908c16871a682, keycount=3, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=7, earliestPutTs=1416429709218 2014-11-19 12:41:51,250 DEBUG [pool-1-thread-1] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f59077c0a0e841be97e370d475ee90a1, keycount=1, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=10, earliestPutTs=1416429710761 2014-11-19 12:41:51,251 DEBUG [pool-1-thread-1] compactions.Compactor(157): Compacting hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/707b266c094d4facbf6d1c8804f777fc, keycount=1, bloomtype=ROW, size=4.8 K, encoding=NONE, seqNum=13, earliestPutTs=1416429710958 2014-11-19 12:41:51,285 INFO [IPC Server handler 7 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741851_1027{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|RBW]]} size 0 2014-11-19 12:41:51,304 INFO [pool-1-thread-1] regionserver.HMobStore(223): Renaming flushed file from hdfs://localhost:45640/user/jenkins/hbase/mobdir/.tmp/d41d8cd98f00b204e9800998ecf8427e20141119417763473e1149b0b990df14d883c2e3 to hdfs://localhost:45640/user/jenkins/hbase/mobdir/data/default/testMobMetrics/7799e477d8a400bf8295a1af7a73a13b/d/d41d8cd98f00b204e9800998ecf8427e20141119417763473e1149b0b990df14d883c2e3 2014-11-19 12:41:51,322 INFO [IPC Server handler 4 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741852_1028{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-e87da309-3799-4e45-ab72-12963622a667:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:51,348 DEBUG [pool-1-thread-1] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/4454bb5423aa421fbaed8403f2652db9 as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/4454bb5423aa421fbaed8403f2652db9 2014-11-19 12:41:51,529 DEBUG [pool-1-thread-1] regionserver.HStore(1535): Removing store files after compaction... 
2014-11-19 12:41:51,538 DEBUG [pool-1-thread-1] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f14951c08aca4f4bb256514c8dfcb98c, to hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f14951c08aca4f4bb256514c8dfcb98c 2014-11-19 12:41:51,542 DEBUG [pool-1-thread-1] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/3781362dc6c94dffba9908c16871a682, to hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/3781362dc6c94dffba9908c16871a682 2014-11-19 12:41:51,546 DEBUG [pool-1-thread-1] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f59077c0a0e841be97e370d475ee90a1, to hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/f59077c0a0e841be97e370d475ee90a1 2014-11-19 12:41:51,550 DEBUG [pool-1-thread-1] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/707b266c094d4facbf6d1c8804f777fc, to hdfs://localhost:45640/user/jenkins/hbase/archive/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/707b266c094d4facbf6d1c8804f777fc 2014-11-19 12:41:51,551 INFO [pool-1-thread-1] regionserver.HStore(1234): Completed major compaction of 4 file(s) in d of testMobMetrics,,1416429708391.9a75b9728c76748d2963cbc975504a43. into 4454bb5423aa421fbaed8403f2652db9(size=5.3 K), total size for store is 5.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
2014-11-19 12:41:51,556 DEBUG [pool-1-thread-1-EventThread] zookeeper.ZooKeeperWatcher(310): 584915ecb8b84e3db4abc9f8f9def7fd-0x149c9ca09350012, quorum=localhost:64128, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/MOB/testMobMetrics:d-majorCompaction/584915ecb8b84e3db4abc9f8f9def7fd 2014-11-19 12:41:51,595 INFO [IPC Server handler 3 on 45640] blockmanagement.BlockManager(1076): BLOCK* addToInvalidates: blk_1073741850_1026 127.0.0.1:52665 2014-11-19 12:41:51,605 INFO [IPC Server handler 4 on 45640] blockmanagement.BlockManager(2418): BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:52665 is added to blk_1073741853_1029{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b0b4a7ef-0eb2-45a5-9f70-164c4909df04:NORMAL|FINALIZED]]} size 0 2014-11-19 12:41:51,610 INFO [pool-1-thread-1] hbase.ResourceChecker(171): after: regionserver.TestRegionServerMetrics#testMobMetrics Thread=229 (was 203) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46662 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46617 [Waiting for operation #13] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46605 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS:0;p0120:56624-smallCompactions-1416429710151-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46652 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46598 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) 
java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46608 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS_OPEN_REGION-p0120:56624-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS:0;p0120:56624-smallCompactions-1416429710151 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2183) org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2148) org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106) org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.finishClose(AbstractHFileWriter.java:250) org.apache.hadoop.hbase.io.hfile.HFileWriterV3.finishClose(HFileWriterV3.java:228) org.apache.hadoop.hbase.io.hfile.HFileWriterV2.close(HFileWriterV2.java:402) org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:974) org.apache.hadoop.hbase.regionserver.compactions.Compactor.appendMetadataAndCloseWriter(Compactor.java:318) org.apache.hadoop.hbase.mob.DefaultMobCompactor.performCompaction(DefaultMobCompactor.java:222) org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:75) org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121) org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1113) org.apache.hadoop.hbase.regionserver.HMobStore.compact(HMobStore.java:384) org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1483) org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:478) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46604 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: RS:0;p0120:56624-smallCompactions-1416429710151-SendThread(localhost:64128) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:338) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) Potentially hanging thread: AM.ZK.Worker-pool2-t11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AM.ZK.Worker-pool2-t9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46606 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46522 [Waiting for operation #15] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AM.ZK.Worker-pool2-t12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AM.ZK.Worker-pool2-t13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: IPC Client (1530616708) connection to p0120.sjc.cloudera.com/172.17.188.30:33095 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.ipc.RpcClient$Connection.waitForWork(RpcClient.java:678) org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:726) Potentially hanging thread: htable-pool18-t1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: AM.ZK.Worker-pool2-t10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46609 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46585 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) Potentially hanging thread: Thread-229 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:512) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1408732489_139 at /127.0.0.1:46615 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:235) java.io.BufferedInputStream.read(BufferedInputStream.java:254) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:56) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202) java.lang.Thread.run(Thread.java:745) - Thread LEAK? -, OpenFileDescriptor=383 (was 313) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=32768 (was 32768), SystemLoadAverage=397 (was 397), ProcessCount=242 (was 242), AvailableMemoryMB=4364 (was 4581), ConnectionCount=4 (was 4) 2014-11-19 12:41:51,625 DEBUG [RS:0;p0120:56624-smallCompactions-1416429710151] regionserver.HRegionFileSystem(376): Committing store file hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/.tmp/52ee171ab5614ab0a6bd30be1491c80b as hdfs://localhost:45640/user/jenkins/hbase/data/default/testMobMetrics/9a75b9728c76748d2963cbc975504a43/d/52ee171ab5614ab0a6bd30be1491c80b