  Apache Drill: DRILL-4298

SYSTEM ERROR: ChannelClosedException


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.5.0
    • Fix Version/s: 1.7.0
    • Component/s: Execution - RPC
    • Labels: None

    Description

      Build: 1.5.0-SNAPSHOT, commit 2f0e3f27e630d5ac15cdaef808564e01708c3c55

      While running the functional regression suite, I hit this error. It appears to be random and is not associated with any particular query.

      From the client side:

      1/5          create table `existing_partition_pruning/lineitempart` partition by (dir0) as select * from dfs.`/drill/testdata/partition_pruning/dfs/lineitempart`;
      Error: SYSTEM ERROR: ChannelClosedException: Channel closed /10.10.100.171:31010 <--> /10.10.100.171:33713.
      
      Fragment 0:0
      
      [Error Id: 772d90b8-c5e6-4ecc-8776-68ccc6b57d49 on drillats1.qa.lab:31010] (state=,code=0)
      java.sql.SQLException: SYSTEM ERROR: ChannelClosedException: Channel closed /10.10.100.171:31010 <--> /10.10.100.171:33713.
      
      Fragment 0:0
      
      [Error Id: 772d90b8-c5e6-4ecc-8776-68ccc6b57d49 on drillats1.qa.lab:31010]
      	at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
      	at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:321)
      	at net.hydromatic.avatica.AvaticaResultSet.next(AvaticaResultSet.java:187)
      	at org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:172)
      	at sqlline.IncrementalRows.hasNext(IncrementalRows.java:62)
      	at sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:87)
      	at sqlline.TableOutputFormat.print(TableOutputFormat.java:118)
      	at sqlline.SqlLine.print(SqlLine.java:1593)
      	at sqlline.Commands.execute(Commands.java:852)
      	at sqlline.Commands.sql(Commands.java:751)
      	at sqlline.SqlLine.dispatch(SqlLine.java:746)
      	at sqlline.SqlLine.runCommands(SqlLine.java:1651)
      	at sqlline.Commands.run(Commands.java:1304)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:606)
      	at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
      	at sqlline.SqlLine.dispatch(SqlLine.java:742)
      	at sqlline.SqlLine.initArgs(SqlLine.java:553)
      	at sqlline.SqlLine.begin(SqlLine.java:596)
      	at sqlline.SqlLine.start(SqlLine.java:375)
      	at sqlline.SqlLine.main(SqlLine.java:268)
      Caused by: org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: ChannelClosedException: Channel closed /10.10.100.171:31010 <--> /10.10.100.171:33713.
      
      Fragment 0:0
      
      [Error Id: 772d90b8-c5e6-4ecc-8776-68ccc6b57d49 on drillats1.qa.lab:31010]
      	at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:119)
      	at org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:113)
      	at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
      	at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
      	at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67)
      	at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374)
      	at org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
      	at org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252)
      	at org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123)
      	at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285)
      	at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257)
      	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
      	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
      	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
      	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
      	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
      	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
      	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
      	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
      	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
      	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
      	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
      	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:618)
      	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:329)
      	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:250)
      	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
      	at java.lang.Thread.run(Thread.java:744)
      Aborting command set because "force" is false and command failed: "create table `existing_partition_pruning/lineitempart` partition by (dir0) as select * from dfs.`/drill/testdata/partition_pruning/dfs/lineitempart`;"
      Closing: org.apache.drill.jdbc.impl.DrillConnectionImpl
      
      Running command /root/drillAutomation/framework-master/framework/resources/Datasources/hive_storage/execHive.sh resources/Datasources/hive_storage/windows_functions.ddl
      Exiting due to uncaught exception
      java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error executing the command /root/drillAutomation/framework-master/framework/resources/Datasources/ctas_auto_partition/ctas_existing_partition_pruning.sh has return code 1
      	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
      	at java.util.concurrent.FutureTask.get(FutureTask.java:188)
      	at org.apache.drill.test.framework.CancelingExecutor$1.run(CancelingExecutor.java:81)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:744)
      Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error executing the command /root/drillAutomation/framework-master/framework/resources/Datasources/ctas_auto_partition/ctas_existing_partition_pruning.sh has return code 1
      	at org.apache.drill.test.framework.CancelingExecutor$1$1.run(CancelingExecutor.java:76)
      	... 5 more
      Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error executing the command /root/drillAutomation/framework-master/framework/resources/Datasources/ctas_auto_partition/ctas_existing_partition_pruning.sh has return code 1
      	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
      	at java.util.concurrent.FutureTask.get(FutureTask.java:202)
      	at org.apache.drill.test.framework.CancelingExecutor$1$1.run(CancelingExecutor.java:72)
      	... 5 more
      Caused by: java.lang.RuntimeException: Error executing the command /root/drillAutomation/framework-master/framework/resources/Datasources/ctas_auto_partition/ctas_existing_partition_pruning.sh has return code 1
      	at org.apache.drill.test.framework.TestDriver.runGenerateScript(TestDriver.java:471)
      	at org.apache.drill.test.framework.TestDriver.access$400(TestDriver.java:46)
      	at org.apache.drill.test.framework.TestDriver$2.run(TestDriver.java:411)
      	... 5 more
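
      For anyone reproducing this outside the test framework, a minimal JDBC harness
      along the lines below drives the same code path as the sqlline trace above
      (DrillCursor surfacing the server-side failure as a java.sql.SQLException).
      The ZooKeeper quorum in the URL is illustrative, not taken from this cluster:

      // Minimal sketch; the connection URL is an assumption for illustration.
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.SQLException;
      import java.sql.Statement;

      public class CtasRepro {
        public static void main(String[] args) {
          // jdbc:drill:zk=<quorum> is the standard Drill JDBC URL form.
          String url = "jdbc:drill:zk=drillats1.qa.lab:2181";  // illustrative quorum
          String ctas = "create table `existing_partition_pruning/lineitempart` "
              + "partition by (dir0) as select * from "
              + "dfs.`/drill/testdata/partition_pruning/dfs/lineitempart`";
          try (Connection conn = DriverManager.getConnection(url);
               Statement stmt = conn.createStatement()) {
            stmt.execute(ctas);
          } catch (SQLException e) {
            // With this bug the message is:
            // SYSTEM ERROR: ChannelClosedException: Channel closed <server> <--> <client>.
            e.printStackTrace();
          }
        }
      }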
      

      drillbit.log

      [root@drillats1 ~]# clush -a grep 295f7e89-0693-e2bb-7ab6-98d75e83e145 /var/log/drill/drillbit.log
      clush: 10.10.100.172: exited with exit code 1
      clush: 10.10.100.173: exited with exit code 1
      10.10.100.171: 2016-01-20 22:57:58,930 [295f7e89-0693-e2bb-7ab6-98d75e83e145:foreman] INFO  o.a.drill.exec.work.foreman.Foreman - Query text for query id 295f7e89-0693-e2bb-7ab6-98d75e83e145: create table `existing_partition_pruning/lineitempart` partition by (dir0) as select * from dfs.`/drill/testdata/partition_pruning/dfs/lineitempart`
      10.10.100.171: 2016-01-20 22:57:59,153 [295f7e89-0693-e2bb-7ab6-98d75e83e145:foreman] INFO  o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 7 out of 7 using 7 threads. Time: 4ms total, 1.736564ms avg, 2ms max.
      10.10.100.171: 2016-01-20 22:57:59,154 [295f7e89-0693-e2bb-7ab6-98d75e83e145:foreman] INFO  o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 7 out of 7 using 7 threads. Earliest start: 1007.185000 μs, Latest start: 2613.030000 μs, Average start: 1986.881857 μs .
      10.10.100.171: 2016-01-20 22:57:59,456 [295f7e89-0693-e2bb-7ab6-98d75e83e145:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0: State change requested AWAITING_ALLOCATION --> RUNNING
      10.10.100.171: 2016-01-20 22:57:59,457 [295f7e89-0693-e2bb-7ab6-98d75e83e145:frag:0:0] INFO  o.a.d.e.w.f.FragmentStatusReporter - 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0: State to report: RUNNING
      10.10.100.171: 2016-01-20 22:58:03,939 [UserServer-1] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0: State change requested RUNNING --> FAILED
      10.10.100.171: 2016-01-20 22:58:03,941 [295f7e89-0693-e2bb-7ab6-98d75e83e145:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0: State change requested FAILED --> FINISHED
      10.10.100.171: 2016-01-20 22:58:03,951 [295f7e89-0693-e2bb-7ab6-98d75e83e145:frag:0:0] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: ChannelClosedException: Channel closed /10.10.100.171:31010 <--> /10.10.100.171:33713.
      10.10.100.171: 2016-01-20 22:58:03,985 [CONTROL-rpc-event-queue] WARN  o.a.d.e.w.b.ControlMessageHandler - Dropping request to cancel fragment. 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0 does not exist.
      clush: 10.10.100.174: exited with exit code 1
      
      2016-01-20 22:58:03,939 [UserServer-1] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0: State change requested RUNNING --> FAILED
      2016-01-20 22:58:03,941 [295f7e89-0693-e2bb-7ab6-98d75e83e145:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0: State change requested FAILED --> FINISHED
      2016-01-20 22:58:03,951 [295f7e89-0693-e2bb-7ab6-98d75e83e145:frag:0:0] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: ChannelClosedException: Channel closed /10.10.100.171:31010 <--> /10.10.100.171:33713.
      
      Fragment 0:0
      
      [Error Id: 772d90b8-c5e6-4ecc-8776-68ccc6b57d49 on drillats1.qa.lab:31010]
      org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: ChannelClosedException: Channel closed /10.10.100.171:31010 <--> /10.10.100.171:33713.
      
      Fragment 0:0
      
      [Error Id: 772d90b8-c5e6-4ecc-8776-68ccc6b57d49 on drillats1.qa.lab:31010]
              at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:321) [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:184) [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:290) [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_45]
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_45]
              at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
      Caused by: org.apache.drill.exec.rpc.ChannelClosedException: Channel closed /10.10.100.171:31010 <--> /10.10.100.171:33713.
              at org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:173) ~[drill-rpc-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:149) ~[drill-rpc-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) ~[netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) ~[netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) ~[netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406) ~[netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943) ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592) ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584) ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.closeOnRead(AbstractEpollStreamChannel.java:409) ~[netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
              at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:647) ~[netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
              at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollRdHupReady(AbstractEpollStreamChannel.java:573) ~[netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
              at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:315) ~[netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
              at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:250) ~[netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
              at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) ~[netty-common-4.0.27.Final.jar:4.0.27.Final]
              ... 1 common frames omitted
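      The ChannelClosedHandler frames above follow the standard Netty pattern of
      registering a listener on the channel's close future: when the peer disconnects
      (the epollRdHupReady frame), the close future completes and the listener fails
      whatever work was bound to the connection. A self-contained sketch of that
      pattern follows; names here are illustrative, not Drill's actual handler:

      import io.netty.channel.Channel;
      import io.netty.channel.ChannelFuture;
      import io.netty.channel.ChannelFutureListener;
      import java.util.function.Consumer;

      public class CloseWatcher {
        /** Illustrative stand-in for org.apache.drill.exec.rpc.ChannelClosedException. */
        static class ChannelClosedException extends RuntimeException {
          ChannelClosedException(String msg) { super(msg); }
        }

        /** failOutstandingRpcs is a hypothetical callback that fails in-flight work. */
        static void watch(Channel channel, final Consumer<Throwable> failOutstandingRpcs) {
          // closeFuture() completes when the channel closes for any reason,
          // including the remote side dropping the connection.
          channel.closeFuture().addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
              Channel ch = future.channel();
              // Drill's handler builds the "Channel closed <local> <--> <remote>."
              // message seen in the logs and fails the running fragment with it.
              failOutstandingRpcs.accept(new ChannelClosedException(
                  "Channel closed " + ch.localAddress() + " <--> " + ch.remoteAddress() + "."));
            }
          });
        }
      }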
      2016-01-20 22:58:03,984 [CONTROL-rpc-event-queue] WARN  o.a.drill.exec.work.foreman.Foreman - Dropping request to move to COMPLETED state as query is already at FAILED state (which is terminal).
      2016-01-20 22:58:03,985 [CONTROL-rpc-event-queue] WARN  o.a.d.e.w.b.ControlMessageHandler - Dropping request to cancel fragment. 295f7e89-0693-e2bb-7ab6-98d75e83e145:0:0 does not exist.
      2016-01-20 22:58:04,537 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - Merging and spilling to /tmp/drill/spill/295f7e85-eb64-6e07-a643-e40903e7b97e/major_fragment_0/minor_fragment_0/operator_6/0
      2016-01-20 22:58:04,715 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - Completed spilling to /tmp/drill/spill/295f7e85-eb64-6e07-a643-e40903e7b97e/major_fragment_0/minor_fragment_0/operator_6/0
      2016-01-20 22:58:04,769 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - Merging and spilling to /tmp/drill/spill/295f7e85-eb64-6e07-a643-e40903e7b97e/major_fragment_0/minor_fragment_0/operator_6/1
      2016-01-20 22:58:04,822 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e8b-e1a0-3402-bfa4-d401b8758d0b:0:0: State change requested RUNNING --> CANCELLATION_REQUESTED
      2016-01-20 22:58:04,823 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 295f7e8b-e1a0-3402-bfa4-d401b8758d0b:0:0: State to report: CANCELLATION_REQUESTED
      2016-01-20 22:58:04,875 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - Completed spilling to /tmp/drill/spill/295f7e85-eb64-6e07-a643-e40903e7b97e/major_fragment_0/minor_fragment_0/operator_6/1
      2016-01-20 22:58:04,901 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - Merging and spilling to /tmp/drill/spill/295f7e85-eb64-6e07-a643-e40903e7b97e/major_fragment_0/minor_fragment_0/operator_6/2
      2016-01-20 22:58:04,952 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - Completed spilling to /tmp/drill/spill/295f7e85-eb64-6e07-a643-e40903e7b97e/major_fragment_0/minor_fragment_0/operator_6/2
      2016-01-20 22:58:04,953 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] WARN  o.a.d.e.p.i.xsort.ExternalSortBatch - Starting to merge. 6 batch groups. Current allocated memory: 53696128
      2016-01-20 22:58:05,005 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e85-eb64-6e07-a643-e40903e7b97e:0:0: State change requested RUNNING --> CANCELLATION_REQUESTED
      2016-01-20 22:58:05,006 [CONTROL-rpc-event-queue] INFO  o.a.d.e.w.f.FragmentStatusReporter - 295f7e85-eb64-6e07-a643-e40903e7b97e:0:0: State to report: CANCELLATION_REQUESTED
      2016-01-20 22:58:05,043 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e85-eb64-6e07-a643-e40903e7b97e:0:0: State change requested CANCELLATION_REQUESTED --> FINISHED
      2016-01-20 22:58:05,043 [295f7e85-eb64-6e07-a643-e40903e7b97e:frag:0:0] INFO  o.a.d.e.w.f.FragmentStatusReporter - 295f7e85-eb64-6e07-a643-e40903e7b97e:0:0: State to report: CANCELLED
      2016-01-20 22:58:05,070 [UserServer-1] INFO  o.a.drill.exec.work.foreman.Foreman - Failure while trying communicate query result to initiating client. This would happen if a client is disconnected before response notice can be sent.
      org.apache.drill.exec.rpc.ChannelClosedException: null
              at org.apache.drill.exec.rpc.CoordinationQueue$RpcListener.operationComplete(CoordinationQueue.java:89) [drill-rpc-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at org.apache.drill.exec.rpc.CoordinationQueue$RpcListener.operationComplete(CoordinationQueue.java:67) [drill-rpc-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:788) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:689) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1114) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:705) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:980) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1032) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:965) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254) [netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
              at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
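      The CoordinationQueue$RpcListener frames show the complementary side: the
      Foreman writes the final query result to the already-closed user channel,
      Netty fails the write promise (safeSetFailure in the trace), and the listener
      converts that failure into the ChannelClosedException logged above (with a
      null message, hence "ChannelClosedException: null"). An illustrative sketch
      of that write-listener pattern, not Drill's actual code:

      import io.netty.buffer.Unpooled;
      import io.netty.channel.Channel;
      import io.netty.channel.ChannelFuture;
      import io.netty.channel.ChannelFutureListener;

      public class GuardedWrite {
        static void sendFinalState(Channel userChannel, byte[] payload) {
          ChannelFuture write = userChannel.writeAndFlush(Unpooled.wrappedBuffer(payload));
          write.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
              if (!future.isSuccess()) {
                // Writing to a closed channel fails the promise; log rather than
                // crash, matching the Foreman's "Failure while trying communicate
                // query result to initiating client" warning.
                System.err.println("Could not deliver final state to client: " + future.cause());
              }
            }
          });
        }
      }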
      2016-01-20 22:58:05,071 [UserServer-1] WARN  o.a.drill.exec.work.foreman.Foreman - Dropping request to move to FAILED state as query is already at CANCELED state (which is terminal).
      2016-01-20 22:58:05,279 [295f7e8b-e1a0-3402-bfa4-d401b8758d0b:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 295f7e8b-e1a0-3402-bfa4-d401b8758d0b:0:0: State change requested CANCELLATION_REQUESTED --> FINISHED
      2016-01-20 22:58:05,280 [295f7e8b-e1a0-3402-bfa4-d401b8758d0b:frag:0:0] INFO  o.a.d.e.w.f.FragmentStatusReporter - 295f7e8b-e1a0-3402-bfa4-d401b8758d0b:0:0: State to report: CANCELLED
      2016-01-20 22:58:05,308 [UserServer-1] INFO  o.a.drill.exec.work.foreman.Foreman - Failure while trying communicate query result to initiating client. This would happen if a client is disconnected before response notice can be sent.
      org.apache.drill.exec.rpc.ChannelClosedException: null
              at org.apache.drill.exec.rpc.CoordinationQueue$RpcListener.operationComplete(CoordinationQueue.java:89) [drill-rpc-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at org.apache.drill.exec.rpc.CoordinationQueue$RpcListener.operationComplete(CoordinationQueue.java:67) [drill-rpc-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
              at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:788) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:689) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1114) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:705) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:980) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1032) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:965) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254) [netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
              at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [netty-common-4.0.27.Final.jar:4.0.27.Final]
              at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
      2016-01-20 22:58:05,308 [UserServer-1] WARN  o.a.drill.exec.work.foreman.Foreman - Dropping request to move to FAILED state as query is already at CANCELED state (which is terminal).
      


          People

            Assignee: Abdel Hakim Deneche (adeneche)
            Reporter: Chun Chang (cchang@maprtech.com)
            Votes: 0
            Watchers: 4
