Details
Description
Joining two 1 GB CSV tables results in the error below:
> select a.* from dfs.root.`user/hive/warehouse/passwords_csv_big` a, dfs.root.`user/hive/warehouse/passwords_csv_big` b
. . . . . . . . . . . . . . . . . . . . . . .> where a.columns[1]=b.columns[1] limit 5;
+------------+
|  columns   |
+------------+
| ["1","787148","92921","158596","17776","896094","2"] |
| ["1","787148","10930","348699","534058","778852","2"] |
| ["1","787148","10930","348699","534058","778852","2"] |
| ["1","787148","10930","348699","534058","778852","2"] |
| ["1","787148","10930","348699","534058","778852","2"] |
java.lang.RuntimeException: java.sql.SQLException: SYSTEM ERROR: org.apache.drill.exec.rpc.RpcException: Data not accepted downstream.

Fragment 5:15

[Error Id: dd25cee9-1d1d-4658-9a83-cdefcafb7031 on h3.poc.com:31010]

  (org.apache.drill.exec.rpc.RpcException) Data not accepted downstream.
    org.apache.drill.exec.ops.StatusHandler.success():54
    org.apache.drill.exec.ops.StatusHandler.success():29
    org.apache.drill.exec.rpc.ListeningCommand$DeferredRpcOutcome.success():55
    org.apache.drill.exec.rpc.ListeningCommand$DeferredRpcOutcome.success():46
    org.apache.drill.exec.rpc.data.DataTunnel$ThrottlingOutcomeListener.success():133
    org.apache.drill.exec.rpc.data.DataTunnel$ThrottlingOutcomeListener.success():116
    org.apache.drill.exec.rpc.CoordinationQueue$RpcListener.set():98
    org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode():243
    org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode():188
    io.netty.handler.codec.MessageToMessageDecoder.channelRead():89
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
    io.netty.handler.timeout.IdleStateHandler.channelRead():254
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
    io.netty.handler.codec.MessageToMessageDecoder.channelRead():103
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
    io.netty.handler.codec.ByteToMessageDecoder.channelRead():242
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
    io.netty.channel.ChannelInboundHandlerAdapter.channelRead():86
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
    io.netty.channel.DefaultChannelPipeline.fireChannelRead():847
    io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady():618
    io.netty.channel.epoll.EpollEventLoop.processReady():329
    io.netty.channel.epoll.EpollEventLoop.run():250
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run():111
    java.lang.Thread.run():745

    at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
    at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
    at sqlline.SqlLine.print(SqlLine.java:1809)
    at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
    at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
    at sqlline.SqlLine.dispatch(SqlLine.java:889)
    at sqlline.SqlLine.begin(SqlLine.java:763)
    at sqlline.SqlLine.start(SqlLine.java:498)
    at sqlline.SqlLine.main(SqlLine.java:460)
It can be worked around by changing drill.exec.buffer.size.
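For reference, the workaround was applied along these lines: drill.exec.buffer.size is a boot-time option set in drill-override.conf on each drillbit (restart required). The value 16 below is only an illustration, not the exact setting used on this cluster:

    # drill-override.conf (illustrative sketch; value 16 is a guess, tune as needed)
    drill.exec: {
      buffer.size: 16
    }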
My understanding is that "drill.exec.buffer.size" should only affect performance; it should not cause a query to fail, right?