Details

Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Labels: None
Description
It seems to happen when there is a join in one of the legs of the UNION ALL.
Both legs have an inner join:
0: jdbc:drill:schema=dfs> select t1.*, t2.* from t1, t2 where t1.c1 = t2.c2 union all select t1.*, t3.* from t1, t3 where t1.c1 = t3.c3;
+------------+------------+------------+------------+------------+------------+
| a1         | b1         | c1         | a2         | b2         | c2         |
+------------+------------+------------+------------+------------+------------+
| 1          | aaaaa      | 2015-01-01 | 1          | aaaaa      | 2015-01-01 |
| 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
| 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
| 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
| 3          | ccccc      | 2015-01-03 | 3          | ccccc      | 2015-01-03 |
| 4          | null       | 2015-01-04 | 4          | ddddd      | 2015-01-04 |
| 5          | eeeee      | 2015-01-05 | 5          | eeeee      | 2015-01-05 |
| 6          | fffff      | 2015-01-06 | 6          | fffff      | 2015-01-06 |
| 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
| 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
| null       | hhhhh      | 2015-01-08 | 8          | hhhhh      | 2015-01-08 |
java.lang.IndexOutOfBoundsException: index: 0, length: 1 (expected: range(0, 0))
        at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:156)
        at io.netty.buffer.DrillBuf.chk(DrillBuf.java:178)
        at io.netty.buffer.DrillBuf.getByte(DrillBuf.java:673)
        at org.apache.drill.exec.vector.UInt1Vector$Accessor.get(UInt1Vector.java:309)
        at org.apache.drill.exec.vector.NullableIntVector$Accessor.isSet(NullableIntVector.java:342)
        at org.apache.drill.exec.vector.NullableIntVector$Accessor.isNull(NullableIntVector.java:338)
        at org.apache.drill.exec.vector.NullableIntVector$Accessor.getObject(NullableIntVector.java:359)
        at org.apache.drill.exec.vector.accessor.NullableIntAccessor.getObject(NullableIntAccessor.java:98)
        at org.apache.drill.jdbc.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:136)
        at net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
        at sqlline.SqlLine$Rows$Row.<init>(SqlLine.java:2388)
        at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2504)
        at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
        at sqlline.SqlLine.print(SqlLine.java:1809)
        at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
        at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
        at sqlline.SqlLine.dispatch(SqlLine.java:889)
        at sqlline.SqlLine.begin(SqlLine.java:763)
        at sqlline.SqlLine.start(SqlLine.java:498)
        at sqlline.SqlLine.main(SqlLine.java:460)
One leg has an inner join:
0: jdbc:drill:schema=dfs> select t1.*, t2.* from t1, t2 where t1.c1 = t2.c2 union all select t3.*, t3.* from t3;
+------------+------------+------------+------------+------------+------------+
| a1         | b1         | c1         | a2         | b2         | c2         |
+------------+------------+------------+------------+------------+------------+
| 1          | aaaaa      | 2015-01-01 | 1          | aaaaa      | 2015-01-01 |
| 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
| 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
| 2          | bbbbb      | 2015-01-02 | 2          | bbbbb      | 2015-01-02 |
| 3          | ccccc      | 2015-01-03 | 3          | ccccc      | 2015-01-03 |
| 4          | null       | 2015-01-04 | 4          | ddddd      | 2015-01-04 |
| 5          | eeeee      | 2015-01-05 | 5          | eeeee      | 2015-01-05 |
| 6          | fffff      | 2015-01-06 | 6          | fffff      | 2015-01-06 |
| 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
| 7          | ggggg      | 2015-01-07 | 7          | ggggg      | 2015-01-07 |
| null       | hhhhh      | 2015-01-08 | 8          | hhhhh      | 2015-01-08 |
java.lang.IndexOutOfBoundsException: index: 0, length: 1 (expected: range(0, 0))
        at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:156)
        at io.netty.buffer.DrillBuf.chk(DrillBuf.java:178)
        at io.netty.buffer.DrillBuf.getByte(DrillBuf.java:673)
        at org.apache.drill.exec.vector.UInt1Vector$Accessor.get(UInt1Vector.java:309)
        at org.apache.drill.exec.vector.NullableIntVector$Accessor.isSet(NullableIntVector.java:342)
        at org.apache.drill.exec.vector.NullableIntVector$Accessor.isNull(NullableIntVector.java:338)
        at org.apache.drill.exec.vector.NullableIntVector$Accessor.getObject(NullableIntVector.java:359)
        at org.apache.drill.exec.vector.accessor.NullableIntAccessor.getObject(NullableIntAccessor.java:98)
        at org.apache.drill.jdbc.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:136)
        at net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
        at sqlline.SqlLine$Rows$Row.<init>(SqlLine.java:2388)
        at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2504)
        at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
        at sqlline.SqlLine.print(SqlLine.java:1809)
        at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
        at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
        at sqlline.SqlLine.dispatch(SqlLine.java:889)
        at sqlline.SqlLine.begin(SqlLine.java:763)
        at sqlline.SqlLine.start(SqlLine.java:498)
        at sqlline.SqlLine.main(SqlLine.java:460)
Query plan (for the second query above):
00-01  ProjectAllowDup(*=[$0], *0=[$1])
00-02    UnionAll(all=[true])
00-04      Project(T29¦¦*=[$0], T30¦¦*=[$2])
00-06        HashJoin(condition=[=($1, $3)], joinType=[inner])
00-09          Project(T29¦¦*=[$0], c1=[$1])
00-11            Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/aggregation/sanity/t1]], selectionRoot=/aggregation/sanity/t1, numFiles=1, columns=[`*`]]])
00-08          Project(T30¦¦*=[$0], c2=[$1])
00-10            Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/aggregation/sanity/t2]], selectionRoot=/aggregation/sanity/t2, numFiles=1, columns=[`*`]]])
00-03      Project(T31¦¦*=[$0], T31¦¦*0=[$0])
00-05        Project(T31¦¦*=[$0])
00-07          Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/aggregation/sanity/t3]], selectionRoot=/aggregation/sanity/t3, numFiles=1, columns=[`*`]]])
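For comparison, here is a control case that is not part of the original report (my own sketch, based on the observation above that the failure appears only when a leg contains a join): the same star-column UNION ALL with no join in either leg is assumed not to hit the exception.

    -- Hypothetical control query, not from the report: no join in either leg.
    select t1.*, t1.* from t1
    union all
    select t3.*, t3.* from t3;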
This is how the tables were created:
create table t1(a1, b1, c1) as
select
  case when columns[0] = '' then cast(null as integer) else cast(columns[0] as integer) end,
  case when columns[1] = '' then cast(null as varchar(10)) else cast(columns[1] as varchar(10)) end,
  case when columns[2] = '' then cast(null as date) else cast(columns[2] as date) end
from `t1.csv`;

create table t2(a2, b2, c2) as
select
  case when columns[0] = '' then cast(null as integer) else cast(columns[0] as integer) end,
  case when columns[1] = '' then cast(null as varchar(10)) else cast(columns[1] as varchar(10)) end,
  case when columns[2] = '' then cast(null as date) else cast(columns[2] as date) end
from `t2.csv`;

create table t3(a3, b3, c3) as
select
  case when columns[0] = '' then cast(null as integer) else cast(columns[0] as integer) end,
  case when columns[1] = '' then cast(null as varchar(10)) else cast(columns[1] as varchar(10)) end,
  case when columns[2] = '' then cast(null as date) else cast(columns[2] as date) end
from `t3.csv`;
The tables' data types are compatible, but the column names are different.
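For reference, UNION ALL matches columns by position and type rather than by name, so the differing column names should not by themselves make the legs incompatible. A minimal illustration with explicit column lists (my own sketch, not part of the original report):

    -- Illustration only: a1/b1/c1 and a3/b3/c3 line up positionally
    -- as (integer, varchar, date) in both legs.
    select a1, b1, c1 from t1
    union all
    select a3, b3, c3 from t3;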
Issue Links
- is related to DRILL-2414: Union-All on SELECT * FROM schema-less data source will throw exception (Resolved)