Apache Drill / DRILL-6623

Drill encounters an IndexOutOfBoundsException: writerIndex: -8373248 (expected: readerIndex(0) <= writerIndex <= capacity(32768))


Details

    Description

      This is the query:
      alter session set `planner.width.max_per_node` = 1;
      alter session set `planner.width.max_per_query` = 1;
      select * from (
      select
      split_part(CharacterValuea, '8', 1) CharacterValuea,
      split_part(CharacterValueb, '8', 1) CharacterValueb,
      split_part(CharacterValuec, '8', 2) CharacterValuec,
      split_part(CharacterValued, '8', 3) CharacterValued,
      split_part(CharacterValuee, 'b', 1) CharacterValuee
      from (select * from dfs.`/drill/testdata/batch_memory/character5_1MB_1GB.parquet`
            order by CharacterValuea) d
      where d.CharacterValuea = '1234567890123110');

      The query works with a smaller table.
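      For reference, the same repro can be driven programmatically through Drill's JDBC driver. The sketch below is hypothetical (the direct-drillbit connection string and the expectation that the SYSTEM ERROR surfaces as a SQLException on the client are my assumptions); it only mirrors the session settings and query above.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class Drill6623Repro {
        public static void main(String[] args) throws Exception {
          // Drill's JDBC driver (drill-jdbc-all jar must be on the classpath).
          Class.forName("org.apache.drill.jdbc.Driver");
          // Hypothetical direct-drillbit connection string; substitute the real host.
          try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=qa-node186.qa.lab:31010");
               Statement stmt = conn.createStatement()) {
            // Same session settings as the failing run: one minor fragment per node and per query.
            stmt.execute("alter session set `planner.width.max_per_node` = 1");
            stmt.execute("alter session set `planner.width.max_per_query` = 1");
            // The failing query from the description, verbatim.
            String sql =
                "select * from ("
              + "select "
              + "split_part(CharacterValuea, '8', 1) CharacterValuea, "
              + "split_part(CharacterValueb, '8', 1) CharacterValueb, "
              + "split_part(CharacterValuec, '8', 2) CharacterValuec, "
              + "split_part(CharacterValued, '8', 3) CharacterValued, "
              + "split_part(CharacterValuee, 'b', 1) CharacterValuee "
              + "from (select * from dfs.`/drill/testdata/batch_memory/character5_1MB_1GB.parquet` "
              + "order by CharacterValuea) d "
              + "where d.CharacterValuea = '1234567890123110')";
            try (ResultSet rs = stmt.executeQuery(sql)) {
              int rows = 0;
              while (rs.next()) {
                rows++;
              }
              System.out.println("rows returned: " + rows);
            }
          }
        }
      }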

      This is the stack trace:

      2018-07-19 16:59:48,803 [24aedae9-d1f3-8e12-2e1f-0479915c61b1:frag:0:0] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IndexOutOfBoundsException: writerIndex: -8373248 (expected: readerIndex(0) <= writerIndex <= capacity(32768))
      
      Fragment 0:0
      
      [Error Id: edc75560-41ca-4fdd-907f-060be1795786 on qa-node186.qa.lab:31010]
      org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IndexOutOfBoundsException: writerIndex: -8373248 (expected: readerIndex(0) <= writerIndex <= capacity(32768))
      
      Fragment 0:0
      
      [Error Id: edc75560-41ca-4fdd-907f-060be1795786 on qa-node186.qa.lab:31010]
      	at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:361) [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:216) [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:327) [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_161]
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_161]
      	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_161]
      Caused by: java.lang.IndexOutOfBoundsException: writerIndex: -8373248 (expected: readerIndex(0) <= writerIndex <= capacity(32768))
      	at io.netty.buffer.AbstractByteBuf.writerIndex(AbstractByteBuf.java:104) ~[netty-buffer-4.0.48.Final.jar:4.0.48.Final]
      	at org.apache.drill.exec.vector.VarCharVector$Mutator.setValueCount(VarCharVector.java:810) ~[vector-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.vector.NullableVarCharVector$Mutator.setValueCount(NullableVarCharVector.java:641) ~[vector-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setValueCount(ProjectRecordBatch.java:329) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork(ProjectRecordBatch.java:242) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:117) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:142) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:172) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:63) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:142) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:172) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:103) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:83) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:93) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:294) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:281) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_161]
      	at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_161]
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595) ~[hadoop-common-2.7.0-mapr-1707.jar:na]
      	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:281) [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
      	... 4 common frames omitted
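      The writerIndex that VarCharVector$Mutator.setValueCount tries to apply is negative, which points (this is my reading, not a confirmed diagnosis) at a 32-bit offset that wrapped once the larger table pushed the accumulated variable-width bytes past Integer.MAX_VALUE; the smaller table staying under that limit would explain why it succeeds. A minimal standalone sketch of that wrap-around, using a hypothetical 1 MB value size:

      // Standalone sketch, not taken from the Drill sources: it only illustrates how a
      // 32-bit running offset for variable-width data wraps negative once the accumulated
      // byte count passes Integer.MAX_VALUE. A negative value handed to writerIndex()
      // would be rejected with exactly this style of IndexOutOfBoundsException.
      public class OffsetOverflowSketch {
        public static void main(String[] args) {
          final int valueLength = 1_048_576; // hypothetical 1 MB per value
          int lastOffset = 0;                // 32-bit running offset, like a VarChar offset entry
          for (int i = 0; i < 2_100; i++) {
            lastOffset += valueLength;       // wraps to a negative number after 2,048 additions
          }
          System.out.println("last offset = " + lastOffset); // prints a negative offset
        }
      }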
      

      This is the explain plan:

      | 00-00    Screen : rowType = RecordType(ANY CharacterValuea, ANY CharacterValueb, ANY CharacterValuec, ANY CharacterValued, ANY CharacterValuee): rowcount = 9216000.0, cumulative cost = {5.815296E8 rows, 8.786270178575306E9 cpu, 0.0 io, 1.00663296E12 network, 9.8304E8 memory}, id = 3374
      00-01      ProjectAllowDup(CharacterValuea=[$0], CharacterValueb=[$1], CharacterValuec=[$2], CharacterValued=[$3], CharacterValuee=[$4]) : rowType = RecordType(ANY CharacterValuea, ANY CharacterValueb, ANY CharacterValuec, ANY CharacterValued, ANY CharacterValuee): rowcount = 9216000.0, cumulative cost = {5.80608E8 rows, 8.785348578575306E9 cpu, 0.0 io, 1.00663296E12 network, 9.8304E8 memory}, id = 3373
      00-02        Project(CharacterValuea=[SPLIT_PART(ITEM($0, 'CharacterValuea'), '8', 1)], CharacterValueb=[SPLIT_PART(ITEM($0, 'CharacterValueb'), '8', 1)], CharacterValuec=[SPLIT_PART(ITEM($0, 'CharacterValuec'), '8', 2)], CharacterValued=[SPLIT_PART(ITEM($0, 'CharacterValued'), '8', 3)], CharacterValuee=[SPLIT_PART(ITEM($0, 'CharacterValuee'), 'b', 1)]) : rowType = RecordType(ANY CharacterValuea, ANY CharacterValueb, ANY CharacterValuec, ANY CharacterValued, ANY CharacterValuee): rowcount = 9216000.0, cumulative cost = {5.71392E8 rows, 8.739268578575306E9 cpu, 0.0 io, 1.00663296E12 network, 9.8304E8 memory}, id = 3372
      00-03          SelectionVectorRemover : rowType = RecordType(DYNAMIC_STAR T3¦¦**): rowcount = 9216000.0, cumulative cost = {5.62176E8 rows, 8.554948578575305E9 cpu, 0.0 io, 1.00663296E12 network, 9.8304E8 memory}, id = 3371
      00-04            Filter(condition=[=(ITEM($0, 'CharacterValuea'), '1234567890123110')]) : rowType = RecordType(DYNAMIC_STAR T3¦¦**): rowcount = 9216000.0, cumulative cost = {5.5296E8 rows, 8.545732578575305E9 cpu, 0.0 io, 1.00663296E12 network, 9.8304E8 memory}, id = 3370
      00-05              Project(T3¦¦**=[$0]) : rowType = RecordType(DYNAMIC_STAR T3¦¦**): rowcount = 6.144E7, cumulative cost = {4.9152E8 rows, 8.263108578575305E9 cpu, 0.0 io, 1.00663296E12 network, 9.8304E8 memory}, id = 3369
      00-06                SingleMergeExchange(sort0=[1]) : rowType = RecordType(DYNAMIC_STAR T3¦¦**, ANY CharacterValuea): rowcount = 6.144E7, cumulative cost = {4.3008E8 rows, 8.201668578575305E9 cpu, 0.0 io, 1.00663296E12 network, 9.8304E8 memory}, id = 3368
      01-01                  OrderedMuxExchange(sort0=[1]) : rowType = RecordType(DYNAMIC_STAR T3¦¦**, ANY CharacterValuea): rowcount = 6.144E7, cumulative cost = {3.6864E8 rows, 7.710148578575305E9 cpu, 0.0 io, 5.0331648E11 network, 9.8304E8 memory}, id = 3367
      02-01                    SelectionVectorRemover : rowType = RecordType(DYNAMIC_STAR T3¦¦**, ANY CharacterValuea): rowcount = 6.144E7, cumulative cost = {3.072E8 rows, 7.648708578575305E9 cpu, 0.0 io, 5.0331648E11 network, 9.8304E8 memory}, id = 3366
      02-02                      Sort(sort0=[$1], dir0=[ASC]) : rowType = RecordType(DYNAMIC_STAR T3¦¦**, ANY CharacterValuea): rowcount = 6.144E7, cumulative cost = {2.4576E8 rows, 7.587268578575305E9 cpu, 0.0 io, 5.0331648E11 network, 9.8304E8 memory}, id = 3365
      02-03                        HashToRandomExchange(dist0=[[$1]]) : rowType = RecordType(DYNAMIC_STAR T3¦¦**, ANY CharacterValuea): rowcount = 6.144E7, cumulative cost = {1.8432E8 rows, 1.2288E9 cpu, 0.0 io, 5.0331648E11 network, 0.0 memory}, id = 3364
      03-01                          Project(T3¦¦**=[$0], CharacterValuea=[$1]) : rowType = RecordType(DYNAMIC_STAR T3¦¦**, ANY CharacterValuea): rowcount = 6.144E7, cumulative cost = {1.2288E8 rows, 2.4576E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 3363
      03-02                            Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/batch_memory/character5_1MB_1GB.parquet]], selectionRoot=maprfs:/drill/testdata/batch_memory/character5_1MB_1GB.parquet, numFiles=1, numRowGroups=25, usedMetadataFile=false, columns=[`**`]]]) : rowType = RecordType(DYNAMIC_STAR **, ANY CharacterValuea): rowcount = 6.144E7, cumulative cost = {6.144E7 rows, 1.2288E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 3362
      

      The table can be found in /home/MAPRTECH/qa/rhou/drill6623.
      I have attached the profile and the drillbit.log.

      This was encountered on an Apache Drill 1.14.0-SNAPSHOT build with the latest code as of July 19, 2018. This is the commit:

      1.14.0-SNAPSHOT 85344abd1ddb73448bdf67cdc6883cb98795a910 DRILL-6614: Allow usage of MapRDBFormatPlugin for HiveStoragePlugin 19.07.2018 @ 10:39:36 PDT rhou@mapr.com 19.07.2018 @ 15:44:52 PDT

      Test query file: character410.q

      Attachments

        1. drillbit.log.61b1
          11 kB
          Robert Hou
        2. 24aedae9-d1f3-8e12-2e1f-0479915c61b1.sys.drill
          11 kB
          Robert Hou


          People

            Assignee: Karthikeyan Manivannan (karthikm)
            Reporter: Robert Hou (rhou)
            Votes: 1
            Watchers: 1
