Apache Drill / DRILL-7953
Query failed with (Too many open files)

Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Not A Bug
    • Affects Version/s: 1.15.0
    • Fix Version/s: None
    • Component/s: Server
    • Labels: None

    Description

      Hi Support,

       

      When we query a complex view that accesses a large number of Parquet files, the query fails with the following error:

      Caused by: org.apache.drill.common.exceptions.ExecutionSetupException: Error opening or reading metadata for parquet file at location: part-00006-df5fe7db-6086-43a3-9575-1b18c140b5e6-c000.snappy.parquet
      at org.apache.drill.exec.store.parquet.columnreaders.PageReader.<init>(PageReader.java:151) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.<init>(AsyncPageReader.java:97) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.<init>(ColumnReader.java:100) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.NullableColumnReader.<init>(NullableColumnReader.java:43) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableFixedByteAlignedReader.<init>(NullableFixedByteAlignedReaders.java:54) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableConvertedReader.<init>(NullableFixedByteAlignedReaders.java:328) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableDateReader.<init>(NullableFixedByteAlignedReaders.java:348) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.ColumnReaderFactory.createFixedColumnReader(ColumnReaderFactory.java:185) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.ParquetColumnMetadata.makeFixedWidthReader(ParquetColumnMetadata.java:141) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.ReadState.buildReader(ReadState.java:123) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.setup(ParquetRecordReader.java:253) ~[drill-java-exec-1.15.0.jar:1.15.0]
      ... 29 common frames omitted
      Caused by: java.io.FileNotFoundException: /data/testing/CH/part-00006-df5fe7db-6086-43a3-9575-1b18c140b5e6-c000.snappy.parquet (Too many open files)
      at java.io.FileInputStream.open0(Native Method) ~[na:1.8.0_181]
      at java.io.FileInputStream.open(FileInputStream.java:195) ~[na:1.8.0_181]
      at java.io.FileInputStream.<init>(FileInputStream.java:138) ~[na:1.8.0_181]
      at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106) ~[hadoop-common-2.7.4.jar:na]
      at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202) ~[hadoop-common-2.7.4.jar:na]
      at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143) ~[hadoop-common-2.7.4.jar:na]
      at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346) ~[hadoop-common-2.7.4.jar:na]
      at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769) ~[hadoop-common-2.7.4.jar:na]
      at org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:151) ~[drill-java-exec-1.15.0.jar:1.15.0]
      at org.apache.drill.exec.store.parquet.columnreaders.PageReader.<init>(PageReader.java:133) ~[drill-java-exec-1.15.0.jar:1.15.0]
      ... 39 common frames omitted
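
      For reference, one way to confirm the descriptor exhaustion is to watch the drillbit's open file descriptor count while the query runs. This is only a rough sketch; the pgrep pattern assumes the drillbit JVM's main class is org.apache.drill.exec.server.Drillbit and may need adjusting:

      # find the drillbit process and count its open file descriptors
      DRILLBIT_PID=$(pgrep -f org.apache.drill.exec.server.Drillbit)
      ls /proc/$DRILLBIT_PID/fd | wc -l
      # lsof gives a broader view (it also lists memory-mapped files and libraries)
      lsof -p $DRILLBIT_PID | wc -l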

       

      We added the lines below to /etc/security/limits.conf, but Drill still uses the default setting of 1024 at startup.

      * hard nofile 65536
      * soft nofile 65536
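
      For completeness, a fuller sketch of the limits.conf entries, assuming the drillbit runs as a dedicated user named drill (the first column is the domain the limit applies to; pam_limits only applies it to new login sessions, so the drillbit has to be restarted from a fresh login for the change to take effect):

      # /etc/security/limits.conf
      drill  soft  nofile  65536
      drill  hard  nofile  65536

      # verify from a fresh login shell for that user (on most systems su goes through pam_limits)
      su - drill -c 'ulimit -Sn; ulimit -Hn'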

       

      Fri Jun 18 19:40:33 AEST 2021 Starting drillbit on drill-testing
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 740731
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 740731
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited
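
      The limit the running drillbit is actually subject to (as opposed to the limit of an interactive shell) can be read straight from the kernel; again, the pgrep pattern is an assumption:

      grep 'Max open files' /proc/$(pgrep -f org.apache.drill.exec.server.Drillbit)/limits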

       

      Is there somewhere we can set this parameter?
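
      One possible place, assuming the drillbit is started with the bundled drillbit.sh script (which sources conf/drill-env.sh before launching the JVM), would be to raise the soft limit there; this can only raise it up to the hard limit already in force:

      # conf/drill-env.sh (sketch; raises the soft limit up to the existing hard limit)
      ulimit -n 65536

      # if the drillbit were managed by systemd instead, the unit file would need:
      #   LimitNOFILE=65536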

          People

            Assignee: Unassigned
            Reporter: Dony Dong