
DRILL-5379: Set HDFS Block Size based on Parquet Block Size


    Description

      It seems there is currently no way to force Drill to store a CTAS-generated Parquet file as a single block when using HDFS, although the Java HDFS API allows it: a file can be created with its HDFS block size set to the Parquet block size configured at the session or system level.

      Ideally, each Parquet file would occupy exactly one HDFS block.

      Here is the HDFS API call that allows this:
      http://archive.cloudera.com/cdh4/cdh/4/hadoop/api/org/apache/hadoop/fs/FileSystem.html#create(org.apache.hadoop.fs.Path,%20boolean,%20int,%20short,%20long)
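      As a minimal sketch of that call (the 512 MB value stands in for Drill's store.parquet.block-size option, and the output path is hypothetical):

      import java.io.IOException;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class SingleBlockWrite {
        public static void main(String[] args) throws IOException {
          // Stand-in for Drill's store.parquet.block-size option (512 MB here).
          long parquetBlockSize = 512L * 1024 * 1024;

          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);
          Path out = new Path("/tmp/ctas/0_0_0.parquet"); // hypothetical output path

          // create(Path, overwrite, bufferSize, replication, blockSize):
          // passing the Parquet block size as the HDFS block size means the
          // whole file fits in a single HDFS block.
          FSDataOutputStream stream = fs.create(
              out,
              true,
              conf.getInt("io.file.buffer.size", 4096),
              fs.getDefaultReplication(out),
              parquetBlockSize);
          stream.close();
        }
      }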

      Drill uses the Hadoop ParquetFileWriter (https://github.com/Parquet/parquet-mr/blob/master/parquet-hadoop/src/main/java/parquet/hadoop/ParquetFileWriter.java). That is where file creation occurs, so intercepting it might be tricky.

      However, ParquetRecordWriter.java (https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetRecordWriter.java) in Drill creates the ParquetFileWriter with a Hadoop Configuration object.

      Something to explore: could the block size be set as a property on that Configuration object before it is passed to the ParquetFileWriter constructor?
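      A rough sketch of that idea, assuming the "dfs.blocksize" property ("dfs.block.size" on older Hadoop) set on the Configuration is honored when ParquetFileWriter opens the output file; the schema and path below are hypothetical, and the package names follow the old parquet-mr repository linked above:

      import java.io.IOException;
      import java.util.Collections;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import parquet.hadoop.ParquetFileWriter;
      import parquet.schema.MessageType;
      import parquet.schema.MessageTypeParser;

      public class ConfBlockSizeWrite {
        public static void main(String[] args) throws IOException {
          long parquetBlockSize = 512L * 1024 * 1024; // stand-in for store.parquet.block-size

          // Assumption: this property is read when the output file is created.
          Configuration conf = new Configuration();
          conf.setLong("dfs.blocksize", parquetBlockSize);

          MessageType schema = MessageTypeParser.parseMessageType(
              "message example { required int32 id; }"); // hypothetical schema
          Path file = new Path("/tmp/ctas/0_0_0.parquet"); // hypothetical path

          ParquetFileWriter writer = new ParquetFileWriter(conf, schema, file);
          writer.start();
          // ... write row groups sized to parquetBlockSize ...
          writer.end(Collections.<String, String>emptyMap());
        }
      }

      One caveat worth verifying: ParquetFileWriter obtains the FileSystem from the Path and Configuration, and Hadoop caches FileSystem instances per scheme and authority, so a cached instance created earlier with a different block size may ignore the value set here. The explicit create(...) overload shown above avoids that ambiguity.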


      People

      Padma Penumarthy
      F Méthot
      Khurram Faraaz

      Votes: 1
      Watchers: 5
