Details
Type: Bug
Status: Resolved
Priority: Critical
Resolution: Fixed
Affects Version/s: 0.9.0
Component/s: None
Description
When a query execution finishes successfully, its status is written to the query history. Once the history has been written, the Tajo master reads the status of the finished query from that history file.
However, when the Tajo master reads the query history, an I/O exception occurs and the Tajo CLI hangs forever. Here is the full stack trace.
2015-01-09 00:27:54,019 INFO org.apache.tajo.master.querymaster.QueryInProgress: Stop query:q_1420730837752_0001
2015-01-09 00:27:54,019 INFO org.apache.tajo.master.rm.TajoWorkerResourceManager: Release Resource: 0.0,512
2015-01-09 00:27:54,019 INFO org.apache.tajo.master.rm.TajoWorkerResourceManager: Released QueryMaster (q_1420730837752_0001) resource.
2015-01-09 00:27:54,019 INFO org.apache.tajo.master.querymaster.QueryInProgress: q_1420730837752_0001 QueryMaster stopped
2015-01-09 00:27:54,032 INFO org.apache.tajo.util.history.HistoryWriter: Create query history file: hdfs://localhost:7020/tmp/tajo-jihoon/staging/history/20150109/query-list/query-list-002754.hist
2015-01-09 00:27:54,899 ERROR org.apache.tajo.util.history.HistoryReader: Reading error:hdfs://localhost:7020/tmp/tajo-jihoon/staging/history/20150107/query-list/query-list-131932.hist, Cannot obtain block length for LocatedBlock{BP-1604697128-192.168.0.12-1412676388616:blk_1073741964_1140; getBlockSize()=1356; corrupt=false; offset=0; locs=[127.0.0.1:50010]}
java.io.IOException: Cannot obtain block length for LocatedBlock{BP-1604697128-192.168.0.12-1412676388616:blk_1073741964_1140; getBlockSize()=1356; corrupt=false; offset=0; locs=[127.0.0.1:50010]}
	at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:350)
	at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:294)
	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:231)
	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:224)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1295)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:296)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:764)
	at org.apache.tajo.util.history.HistoryReader.getQueries(HistoryReader.java:88)
	at org.apache.tajo.util.history.HistoryReader.getQueryInfo(HistoryReader.java:294)
	at org.apache.tajo.master.querymaster.QueryJobManager.getFinishedQuery(QueryJobManager.java:131)
	at org.apache.tajo.master.TajoMasterClientService$TajoMasterClientProtocolServiceHandler.getQueryStatus(TajoMasterClientService.java:471)
	at org.apache.tajo.ipc.TajoMasterClientProtocol$TajoMasterClientProtocolService$2.callBlockingMethod(TajoMasterClientProtocol.java:551)
	at org.apache.tajo.rpc.BlockingRpcServer$ServerHandler.messageReceived(BlockingRpcServer.java:103)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
	at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
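The report does not say how the hang was ultimately fixed, but the failure mode is visible in the trace: the IOException thrown while opening one query-list file in HistoryReader.getQueries propagates up through getQueryInfo to the getQueryStatus RPC handler, so the client never receives an answer. Below is a minimal, hypothetical sketch (not Tajo's actual HistoryReader code) of the defensive pattern this suggests: open each .hist file individually with the plain Hadoop FileSystem API and skip any file that cannot be read, so a single broken history file cannot stall the whole status lookup. The class name SafeHistoryScan, the method readHistoryFiles, and the sample path are illustrative assumptions.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: scan a query-list directory and skip history files that
// cannot be opened (e.g. "Cannot obtain block length for LocatedBlock ...")
// instead of letting the IOException escape to the client RPC handler.
public class SafeHistoryScan {

  public static List<String> readHistoryFiles(FileSystem fs, Path queryListDir) throws IOException {
    List<String> contents = new ArrayList<>();
    for (FileStatus status : fs.listStatus(queryListDir)) {
      if (!status.getPath().getName().endsWith(".hist")) {
        continue; // only query history files
      }
      try (FSDataInputStream in = fs.open(status.getPath())) {
        byte[] buf = new byte[(int) status.getLen()];
        in.readFully(buf);
        contents.add(new String(buf, StandardCharsets.UTF_8));
      } catch (IOException e) {
        // A single unreadable file (for example, one whose last block is still
        // marked under construction) should not abort the whole scan; log it
        // and keep going so the finished-query status can still be answered.
        System.err.println("Skipping unreadable history file " + status.getPath()
            + ": " + e.getMessage());
      }
    }
    return contents;
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Illustrative path only; the layout in the trace above is
    // <staging>/history/<yyyyMMdd>/query-list/query-list-HHmmss.hist
    Path dir = new Path("/tmp/tajo-staging/history/20150107/query-list");
    for (String entry : readHistoryFiles(fs, dir)) {
      System.out.println(entry);
    }
  }
}

Whether the actual fix took this per-file form, repaired the writer side so history files are properly closed, or changed how the CLI waits on getQueryStatus is not stated in this report.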