=== hbase hbck ===
[...]
ERROR: Region { meta => documents,7128586022887322720,1363696791400.79c619508659018ff3ef0887611eb8f7., hdfs => hdfs://nameservice1/hbase/documents/79c619508659018ff3ef0887611eb8f7, deployed => } not deployed on any region server.
[...]

=== hbase hbck -repair ===
[...]
13/04/30 16:49:35 INFO util.HBaseFsckRepair: Region still in transition, waiting for it to become assigned: {NAME => 'documents,7128586022887322720,1363696791400.79c619508659018ff3ef0887611eb8f7.', STARTKEY => '7128586022887322720', ENDKEY => '7130716361635801616', ENCODED => 79c619508659018ff3ef0887611eb8f7,}
Exception in thread "main" java.io.IOException: Region {NAME => 'documents,7128586022887322720,1363696791400.79c619508659018ff3ef0887611eb8f7.', STARTKEY => '7128586022887322720', ENDKEY => '7130716361635801616', ENCODED => 79c619508659018ff3ef0887611eb8f7,} failed to move out of transition within timeout 120000ms
    at org.apache.hadoop.hbase.util.HBaseFsckRepair.waitUntilAssigned(HBaseFsckRepair.java:140)
    at org.apache.hadoop.hbase.util.HBaseFsck.tryAssignmentRepair(HBaseFsck.java:1380)
    at org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:1500)
    at org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1215)
    at org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:396)
    at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:416)
    at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3375)
    at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3205)
[...]
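Each region name printed by hbck and the daemon logs packs four fields together: table, start key, region-creation timestamp, and the hex "encoded name" that doubles as the region's directory under `/hbase/<table>/`. A small parser helps match log entries against HDFS paths; this is a hypothetical helper, not part of HBase, and it assumes a start key without embedded `.` before the final separator (true for the numeric keys in these logs):

```python
# Hypothetical helper (not part of HBase): split a region name as printed by
# hbck and the logs, so entries can be matched against the region's HDFS
# directory /hbase/<table>/<encoded name>.
def parse_region_name(name):
    # Format: <table>,<start key>,<creation timestamp>.<encoded name>.
    # The encoded name is hex (no dots), so splitting on the last '.' is safe;
    # table names cannot contain ',' and the timestamp is numeric, so the first
    # and last ',' bound the start key even if the key itself contains commas.
    body, encoded = name.rstrip(".").rsplit(".", 1)
    table, rest = body.split(",", 1)
    start_key, ts = rest.rsplit(",", 1)
    return {
        "table": table,
        "start_key": start_key,
        "created_ms": int(ts),
        "encoded": encoded,
        "hdfs_dir": "/hbase/%s/%s" % (table, encoded),
    }

info = parse_region_name(
    "documents,7128586022887322720,1363696791400.79c619508659018ff3ef0887611eb8f7.")
print(info["hdfs_dir"])  # /hbase/documents/79c619508659018ff3ef0887611eb8f7
```

The computed `hdfs_dir` is exactly the `hdfs =>` path hbck reports for this region above.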
=== MASTER ===
2013-03-18 19:45:00,014 WARN org.apache.hadoop.hbase.master.AssignmentManager: Region 5b9c16898a371de58f31f0bdf86b1f8b not found on server mia-node04.miacluster.priv,60020,1363275286867; failed processing
2013-03-18 19:45:03,308 ERROR org.apache.hadoop.hbase.master.HMaster: Region server mia-node04.miacluster.priv,60020,1363275286867 reported a fatal error:
ABORTING region server mia-node04.miacluster.priv,60020,1363275286867: Abort; we got an error after point-of-no-return
2013-03-18 19:45:09,204 INFO org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [mia-node04.miacluster.priv,60020,1363275286867]
2013-03-18 19:45:09,274 mia-node08.miacluster.priv INFO org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Splitting logs for mia-node04.miacluster.priv,60020,1363275286867
2013-03-18 19:45:09,392 INFO org.apache.hadoop.hbase.master.SplitLogManager: dead splitlog workers [mia-node04.miacluster.priv,60020,1363275286867]
2013-03-18 19:45:09,395 mia-node08.miacluster.priv INFO org.apache.hadoop.hbase.master.SplitLogManager: started splitting logs in [hdfs://nameservice1/hbase/.logs/mia-node04.miacluster.priv,60020,1363275286867-splitting]

=== REGIONSERVER ===
2013-03-18 18:53:10,015 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~117.3m/122971240, currentsize=0/0 for region documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b. in 167311ms, sequenceid=4076756257, compaction requested=false
2013-03-18 18:53:10,164 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.
2013-03-18 18:53:10,780 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.
2013-03-18 18:53:10,922 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/mia-node04.miacluster.priv,60020,1363275286867/mia-node04.miacluster.priv%2C60020%2C1363275286867.1363629007270, entries=108, filesize=197340654. for /hbase/.logs/mia-node04.miacluster.priv,60020,1363275286867/mia-node04.miacluster.priv%2C60020%2C1363275286867.1363629190015
2013-03-18 18:53:10,922 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: moving old hlog file /hbase/.logs/mia-node04.miacluster.priv,60020,1363275286867/mia-node04.miacluster.priv%2C60020%2C1363275286867.1363625750173 whose highest sequenceid is 4076753663 to /hbase/.oldlogs/mia-node04.miacluster.priv%2C60020%2C1363275286867.1363625750173
2013-03-18 18:53:41,198 INFO org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup of failed split of documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.; Took too long to split the files and create the references, aborting split
java.io.IOException: Took too long to split the files and create the references, aborting split
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.splitStoreFiles(SplitTransaction.java:613)
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:285)
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:450)
    at org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2013-03-18 18:53:47,847 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Cleaned up old failed split transaction detritus: hdfs://nameservice1/hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/splits
2013-03-18 18:53:55,819 INFO org.apache.hadoop.hbase.regionserver.HRegion: Onlined documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.; next sequenceid=4076756258
2013-03-18 18:53:59,108 INFO org.apache.hadoop.hbase.regionserver.SplitRequest: Successful rollback of failed split of documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.
[...]
2013-03-18 19:44:59,963 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.
2013-03-18 19:45:00,066 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.
2013-03-18 19:45:01,286 INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
2013-03-18 19:45:01,377 INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
2013-03-18 19:45:01,435 INFO org.apache.hadoop.hbase.catalog.MetaEditor: Offlined parent region documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b. in META
2013-03-18 19:45:03,277 INFO org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup of failed split of documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.; Failed mia-node04.miacluster.priv,60020,1363275286867-daughterOpener=939c1e9d10cc4e97d7284025f20298fb
java.io.IOException: Failed mia-node04.miacluster.priv,60020,1363275286867-daughterOpener=939c1e9d10cc4e97d7284025f20298fb
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.openDaughters(SplitTransaction.java:363)
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:451)
    at org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.FileNotFoundException: File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1239)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1192)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1165)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1147)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:383)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:170)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44064)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:935)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:923)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:157)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:117)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:245)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:78)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:665)
    at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:435)
    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1026)
    at org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:65)
    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:482)
    at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:566)
    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:293)
    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:230)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2534)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:454)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3308)
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.openDaughterRegion(SplitTransaction.java:504)
    at org.apache.hadoop.hbase.regionserver.SplitTransaction$DaughterOpener.run(SplitTransaction.java:484)
    ... 1 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1239)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1192)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1165)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1147)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:383)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:170)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44064)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    at org.apache.hadoop.ipc.Client.call(Client.java:1160)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy13.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:154)
    at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy14.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:933)
    ... 21 more
2013-03-18 19:45:03,297 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server mia-node04.miacluster.priv,60020,1363275286867: Abort; we got an error after point-of-no-return
2013-03-18 19:45:03,297 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2013-03-18 19:45:03,297 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
[...]
2013-03-18 19:45:03,549 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Abort; we got an error after point-of-no-return
2013-03-18 19:45:06,189 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60020
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020: exiting
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020: exiting
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020: exiting
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020: exiting
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020: exiting
2013-03-18 19:45:06,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020: exiting
2013-03-18 19:45:06,190 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2013-03-18 19:45:06,519 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020: exiting
2013-03-18 19:45:06,519 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020: exiting
2013-03-18 19:45:06,519 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60020: exiting
2013-03-18 19:45:06,519 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60020: exiting
2013-03-18 19:45:06,519 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020: exiting
2013-03-18 19:45:06,416 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020: exiting
2013-03-18 19:45:06,661 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
2013-03-18 19:45:06,729 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: SplitLogWorker mia-node04.miacluster.priv,60020,1363275286867 exiting
2013-03-18 19:45:06,521 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping infoServer
2013-03-18 19:45:06,820 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60030
2013-03-18 19:45:06,974 INFO org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
[...]
2013-03-19 13:39:50,846 INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of region {NAME => 'documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.', STARTKEY => '7128586022887322720', ENDKEY => '7129615683981220941', ENCODED => 939c1e9d10cc4e97d7284025f20298fb,} failed, marking as FAILED_OPEN in ZK
2013-03-19 13:39:50,888 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
2013-03-19 13:39:50,984 INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
2013-03-19 13:39:50,993 ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
java.io.FileNotFoundException: File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1239)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1192)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1165)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1147)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:383)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:170)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44064)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:935)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:923)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:157)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:117)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:245)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:78)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:665)
    at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:435)
    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1026)
    at org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:65)
    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:482)
    at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:566)
    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:293)
    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:230)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2534)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:454)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3308)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3256)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:331)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:107)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1239)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1192)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1165)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1147)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:383)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:170)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44064)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    at org.apache.hadoop.ipc.Client.call(Client.java:1160)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy13.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:154)
    at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy14.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:933)
    ... 25 more
2013-03-19 13:39:50,994 INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of region {NAME => 'documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.', STARTKEY => '7128586022887322720', ENDKEY => '7129615683981220941', ENCODED => 939c1e9d10cc4e97d7284025f20298fb,} failed, marking as FAILED_OPEN in ZK
2013-03-19 13:39:51,013 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
2013-03-19 13:39:51,096 INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
2013-03-19 13:39:51,105 ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
java.io.FileNotFoundException: File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1239) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1192) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1165) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1147) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:383) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:170) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44064) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687) at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:935)
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:923)
	at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:157)
	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:117)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1080)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:245)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:78)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:665)
	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:435)
	at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1026)
	at org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:65)
	at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:482)
	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:566)
	at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:293)
	at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:230)
	at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2534)
	at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:454)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3308)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3256)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:331)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:107)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1239)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1192)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1147)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:383)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:170)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44064)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
	at org.apache.hadoop.ipc.Client.call(Client.java:1160)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at $Proxy13.getBlockLocations(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:154)
	at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy14.getBlockLocations(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:933)
	... 25 more
2013-03-19 13:39:51,106 INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of region {NAME => 'documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.', STARTKEY => '7128586022887322720', ENDKEY => '7129615683981220941', ENCODED => 939c1e9d10cc4e97d7284025f20298fb,} failed, marking as FAILED_OPEN in ZK
2013-03-19 13:39:51,143 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
2013-03-19 13:39:51,234 INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
2013-03-19 13:39:51,243 ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
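The telling frame above is HalfStoreFileReader: the region failing to open (939c1e9d...) appears to be a post-split daughter that still holds a reference ("half") file pointing into the parent region 5b9c1689... (the one the REGIONSERVER log shows being split), and the parent store file it points at is gone from HDFS. If I understand the on-disk layout correctly, reference files are named `<parent-hfile>.<parent-encoded-region-name>` and live in the daughter's column-family directory; a small sketch of how such a name resolves to the missing path (helper name is mine, not an HBase API):

```python
def parent_storefile_path(table_dir: str, family: str, ref_name: str) -> str:
    """Resolve the parent store file a daughter region's reference file points at.

    Reference ("half") files created by a region split are named
    "<parent-hfile>.<parent-encoded-region-name>"; the actual data stays in
    the parent region's directory until the daughter compacts it away.
    """
    hfile, parent_region = ref_name.rsplit(".", 1)
    return f"{table_dir}/{parent_region}/{family}/{hfile}"

# Assuming the daughter 939c1e9d... carries a reference named like this,
# it would resolve to exactly the file the NameNode says does not exist:
missing = parent_storefile_path(
    "/hbase/documents", "d",
    "0707b1ec4c6b41cf9174e0d2a1785fe9.5b9c16898a371de58f31f0bdf86b1f8b")
# -> /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
```

That would explain why the open loops forever: every attempt re-reads the same dangling reference.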
java.io.FileNotFoundException: File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
	[... same stack trace as above ...]
2013-03-19 13:39:51,244 mia-node04.miacluster.priv INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of region {NAME => 'documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.', STARTKEY => '7128586022887322720', ENDKEY => '7129615683981220941', ENCODED => 939c1e9d10cc4e97d7284025f20298fb,} failed, marking as FAILED_OPEN in ZK
2013-03-19 13:39:51,260 mia-node04.miacluster.priv INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received request to open region: documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
2013-03-19 13:39:51,348 mia-node04.miacluster.priv INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
2013-03-19 13:39:51,358 mia-node04.miacluster.priv ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.
java.io.FileNotFoundException: File does not exist: /hbase/documents/5b9c16898a371de58f31f0bdf86b1f8b/d/0707b1ec4c6b41cf9174e0d2a1785fe9
	[... same stack trace as above ...]
2013-03-19 13:39:51,358 INFO org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of region {NAME => 'documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.', STARTKEY => '7128586022887322720', ENDKEY => '7129615683981220941', ENCODED => 939c1e9d10cc4e97d7284025f20298fb,} failed, marking as FAILED_OPEN in ZK
2013-03-19 13:39:51,382 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Received close region: documents,7128586022887322720,1351622415334.5b9c16898a371de58f31f0bdf86b1f8b.. Version of ZK closing node:-1
2013-03-19 13:39:51,382 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Received close for region we are not serving; 5b9c16898a371de58f31f0bdf86b1f8b
2013-03-19 13:40:55,274 WARN org.apache.hadoop.hbase.monitoring.TaskMonitor: Status Initializing region documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.: status=Instantiating store for column family {NAME => 'd', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, state=RUNNING, startTime=1363696694039, completionTime=-1 appears to have been leaked
2013-03-19 13:40:55,275 WARN org.apache.hadoop.hbase.monitoring.TaskMonitor: Status Initializing region documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.: status=Instantiating store for column family {NAME => 'd', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, state=RUNNING, startTime=1363696694176, completionTime=-1 appears to have been leaked
2013-03-19 13:40:55,275 WARN org.apache.hadoop.hbase.monitoring.TaskMonitor: Status Initializing region documents,7128586022887322720,1363632299963.939c1e9d10cc4e97d7284025f20298fb.: status=Instantiating store for column family {NAME => 'd', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, state=RUNNING, startTime=1363696694302, completionTime=-1 appears to have been leaked [...]
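For anyone hitting the same open/FAILED_OPEN loop: before sidelining anything, it can help to enumerate every reference file whose parent store file is already gone. The sketch below walks a local copy of the table directory (e.g. pulled down with `hdfs dfs -get`, since plain Python cannot talk to HDFS directly); all names are illustrative, not HBase tooling. Newer hbck builds also appear to have a `-fixReferenceFiles` option for offlining lingering references, so check `hbase hbck -h` on your version first.

```python
import os

def find_dangling_references(table_dir):
    """Walk <table_dir>/<region>/<family>/ and report reference files
    ("<parent-hfile>.<parent-encoded-region>") whose parent store file
    no longer exists. Returns (reference_path, missing_target) pairs."""
    dangling = []
    for region in os.listdir(table_dir):
        region_dir = os.path.join(table_dir, region)
        if not os.path.isdir(region_dir):
            continue  # skip .regioninfo and other plain files
        for family in os.listdir(region_dir):
            family_dir = os.path.join(region_dir, family)
            if not os.path.isdir(family_dir):
                continue
            for name in os.listdir(family_dir):
                if "." not in name:
                    continue  # ordinary store file, not a reference
                hfile, parent = name.rsplit(".", 1)
                target = os.path.join(table_dir, parent, family, hfile)
                if not os.path.exists(target):
                    dangling.append((os.path.join(family_dir, name), target))
    return dangling
```

In this incident it should flag the daughter regions of 5b9c1689... pointing at the deleted 0707b1ec... store file; whether you then sideline the references or restore the parent file is a separate decision.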