Apache Drill / DRILL-3684

CTAS: Memory leak when using CTAS with TPC-H SF100


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.2.0
    • Component/s: Storage - Parquet
    • Labels: None

    Description

      git.commit.id.abbrev=55dfd0e

      Below is the sequence of operations that resulted in the issue. The input to the CTAS is the lineitem table from the TPC-H SF100 data set:

      ALTER SESSION SET `store.parquet.block-size` = 1030000;
      create table lineitem as select * from dfs.`/drill/testdata/tpch100/lineitem`;
      java.sql.SQLException: SYSTEM ERROR: IllegalStateException: Attempted to close accountor with 2 buffer(s) still allocatedfor QueryId: 2a28b5d1-08f4-cdc6-0287-d8714b8ab713, MajorFragmentId: 1, MinorFragmentId: 10.
      
      
              Total 2 allocation(s) of byte size(s): 65536, 65536, at stack location:
                      org.apache.drill.exec.memory.TopLevelAllocator$ChildAllocator.buffer(TopLevelAllocator.java:266)
                      org.apache.drill.exec.store.parquet.ParquetDirectByteBufferAllocator.allocate(ParquetDirectByteBufferAllocator.java:51)
                      parquet.bytes.CapacityByteArrayOutputStream.allocateSlab(CapacityByteArrayOutputStream.java:69)
                      parquet.bytes.CapacityByteArrayOutputStream.initSlabs(CapacityByteArrayOutputStream.java:85)
                      parquet.bytes.CapacityByteArrayOutputStream.<init>(CapacityByteArrayOutputStream.java:64)
                      parquet.column.values.rle.RunLengthBitPackingHybridEncoder.<init>(RunLengthBitPackingHybridEncoder.java:132)
                      parquet.column.values.rle.RunLengthBitPackingHybridValuesWriter.<init>(RunLengthBitPackingHybridValuesWriter.java:41)
                      parquet.column.ParquetProperties.getColumnDescriptorValuesWriter(ParquetProperties.java:96)
                      parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:76)
                      parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:70)
                      parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:58)
                      parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:183)
                      parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:375)
                      org.apache.drill.exec.store.parquet.ParquetRecordWriter.newSchema(ParquetRecordWriter.java:193)
                      org.apache.drill.exec.store.parquet.ParquetRecordWriter.checkBlockSizeReached(ParquetRecordWriter.java:267)
                      org.apache.drill.exec.store.parquet.ParquetRecordWriter.endRecord(ParquetRecordWriter.java:361)
                      org.apache.drill.exec.store.EventBasedRecordWriter.write(EventBasedRecordWriter.java:64)
                      org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext(WriterRecordBatch.java:106)
                      org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:147)
                      org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
                      org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:83)
                      org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:95)
                      org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:73)
                      org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:258)
                      org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:252)
                      java.security.AccessController.doPrivileged(Native Method)
                      javax.security.auth.Subject.doAs(Subject.java:415)
                      org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
                      org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:252)
                      org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
                      java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
                      java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
                      java.lang.Thread.run(Thread.java:745)
      
      
      Fragment 1:10
      
      [Error Id: 6198bce1-d1fd-46b9-b211-2c7547f489c0 on qa-node191.qa.lab:31010]
      Aborting command set because "force" is false and command failed: "create table lineitem as select * from dfs.`/drill/testdata/tpch100/lineitem`;"
      

      The relevant log file is attached. The data set itself is too large to upload here.
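
      The stack trace points at the 64 KiB slabs that the RLE encoders allocate through ParquetDirectByteBufferAllocator each time ParquetRecordWriter.newSchema() builds a fresh column write store after checkBlockSizeReached() decides to roll to a new row group; with the block size forced down to ~1 MB this path runs very frequently. What follows is a minimal, self-contained Java sketch of that leak pattern, assuming the cause is a re-initialization that never releases the previous block's buffers. The class names (Allocator, Slab, BlockWriter) are illustrative stand-ins, not Drill's actual classes, and the release-before-reallocate guard shown is one plausible shape of a fix, not necessarily the one that was committed.

          import java.util.ArrayList;
          import java.util.List;

          // Models Drill's accounting allocator: it tracks every buffer it
          // hands out and refuses to close while any remain outstanding.
          class Allocator {
              private final List<Slab> outstanding = new ArrayList<>();

              Slab buffer(int size) {
                  Slab s = new Slab(this, size);
                  outstanding.add(s);
                  return s;
              }

              void release(Slab s) { outstanding.remove(s); }

              void close() {
                  if (!outstanding.isEmpty()) {
                      throw new IllegalStateException("Attempted to close accountor with "
                          + outstanding.size() + " buffer(s) still allocated");
                  }
              }
          }

          // A tracked buffer; 65536 mirrors the slab size in the report.
          class Slab {
              private final Allocator owner;
              final byte[] bytes;
              Slab(Allocator owner, int size) { this.owner = owner; this.bytes = new byte[size]; }
              void release() { owner.release(this); }
          }

          // Stand-in for the writer: each new row group allocates fresh
          // encoder slabs, as ParquetRecordWriter.newSchema() does.
          class BlockWriter {
              private final Allocator allocator;
              private List<Slab> slabs;

              BlockWriter(Allocator allocator) {
                  this.allocator = allocator;
                  newSchema();
              }

              void newSchema() {
                  // Without this release loop, every block-size rollover
                  // orphans the previous pair of 64 KiB slabs -- exactly the
                  // "2 buffer(s) still allocated" failure above.
                  if (slabs != null) {
                      for (Slab s : slabs) s.release();
                  }
                  slabs = new ArrayList<>();
                  slabs.add(allocator.buffer(65536));
                  slabs.add(allocator.buffer(65536));
              }

              void close() {
                  for (Slab s : slabs) s.release();
                  slabs = null;
              }
          }

          public class LeakDemo {
              public static void main(String[] args) {
                  Allocator allocator = new Allocator();
                  BlockWriter writer = new BlockWriter(allocator);
                  writer.newSchema();   // block-size limit reached: roll to a new row group
                  writer.close();
                  allocator.close();    // passes only if every slab was released
              }
          }

      Commenting out the release loop in newSchema() reproduces the shape of the reported failure: allocator.close() throws IllegalStateException with two 65536-byte buffers still outstanding.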

      Attachments

        1. correct_error.log (14 kB, attached by Rahul Kumar Challapalli)


            People

              Assignee: Abdel Hakim Deneche (adeneche)
              Reporter: Rahul Kumar Challapalli (rkins)
