SQOOP-1390 (sub-task of SQOOP-1366: Propose to add Parquet support)

Import data to HDFS as a set of Parquet files


    Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.4.6
    • Component/s: tools
    • Labels: None

      Description

      Parquet files keep data in contiguous chunks by column; appending new records to a dataset therefore requires rewriting substantial portions of an existing file, or buffering records in order to create a new file.

      This JIRA proposes adding the ability to import an individual table from an RDBMS into HDFS as a set of Parquet files. We will also extend the command-line interface with a new argument, --as-parquetfile.
      Example invocation:
      sqoop import --connect JDBC_URI --table TABLE --as-parquetfile --target-dir /path/to/files

      The major items are listed as follows (sketches follow the list):

      • Implement ParquetImportMapper.
      • Hook up the ParquetOutputFormat and ParquetImportMapper in the import job.
      • Support imports both from scratch and in append mode.
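
      For append mode, the new flag would presumably be combined with Sqoop's existing --append option:
      sqoop import --connect JDBC_URI --table TABLE --as-parquetfile --append --target-dir /path/to/files

      As a rough sketch of the first two items (under the assumption that Avro's GenericRecord serves as the intermediate representation, since Kite datasets consume Avro records), the mapper could look roughly like the following. The configuration key and the toGenericRecord helper are illustrative assumptions, not the final implementation:

      import java.io.IOException;
      import java.util.Map;

      import org.apache.avro.Schema;
      import org.apache.avro.generic.GenericData;
      import org.apache.avro.generic.GenericRecord;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.NullWritable;
      import org.apache.hadoop.mapreduce.Mapper;
      import org.apache.sqoop.lib.SqoopRecord;

      public class ParquetImportMapper
          extends Mapper<LongWritable, SqoopRecord, GenericRecord, NullWritable> {

        private Schema schema;

        @Override
        protected void setup(Context context) {
          // Assumption: the Avro schema for the table is produced at
          // code-gen time and shipped to the tasks via the job
          // configuration under an agreed-upon key.
          schema = new Schema.Parser().parse(
              context.getConfiguration().get("parquetjob.avro.schema"));
        }

        @Override
        protected void map(LongWritable key, SqoopRecord val, Context context)
            throws IOException, InterruptedException {
          // Emit one GenericRecord per database row; the output format
          // is responsible for writing them out as Parquet.
          context.write(toGenericRecord(val), NullWritable.get());
        }

        // Hypothetical helper: copy the column map of a SqoopRecord into
        // an Avro GenericRecord with the same field names.
        private GenericRecord toGenericRecord(SqoopRecord record) {
          GenericData.Record result = new GenericData.Record(schema);
          for (Map.Entry<String, Object> entry
              : record.getFieldMap().entrySet()) {
            result.put(entry.getKey(), entry.getValue());
          }
          return result;
        }
      }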

      Note that as Parquet is a columnar storage format, it does not make sense to write to it directly from record-based tools. We therefore propose using the Kite SDK to simplify the handling of Parquet-specific details.
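
      For illustration, here is a minimal sketch of writing GenericRecords as Parquet through the Kite SDK. The dataset URI and schema are placeholders, and in the actual import job this logic would live behind the output format rather than in a standalone program:

      import org.apache.avro.Schema;
      import org.apache.avro.SchemaBuilder;
      import org.apache.avro.generic.GenericData;
      import org.apache.avro.generic.GenericRecord;
      import org.kitesdk.data.Dataset;
      import org.kitesdk.data.DatasetDescriptor;
      import org.kitesdk.data.DatasetWriter;
      import org.kitesdk.data.Datasets;
      import org.kitesdk.data.Formats;

      public class KiteParquetSketch {
        public static void main(String[] args) {
          // Placeholder schema standing in for the one Sqoop generates
          // from the table definition.
          Schema schema = SchemaBuilder.record("TABLE").fields()
              .requiredLong("id")
              .requiredString("name")
              .endRecord();

          // Declaring the format as Parquet is all that is needed; Kite
          // takes care of Parquet internals such as row groups and pages.
          DatasetDescriptor descriptor = new DatasetDescriptor.Builder()
              .schema(schema)
              .format(Formats.PARQUET)
              .build();

          // The dataset URI is illustrative.
          Dataset<GenericRecord> dataset = Datasets.create(
              "dataset:hdfs:/path/to/files", descriptor, GenericRecord.class);

          DatasetWriter<GenericRecord> writer = dataset.newWriter();
          try {
            GenericRecord record = new GenericData.Record(schema);
            record.put("id", 1L);
            record.put("name", "example");
            writer.write(record);
          } finally {
            writer.close();
          }
        }
      }

      Delegating to Kite keeps row-group buffering and file layout concerns out of Sqoop's record-oriented code paths.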

    Attachments

    1. SQOOP-1390.patch (37 kB, Qian Xu)


    People

    • Assignee: Qian Xu (stanleyxu2005)
    • Reporter: Qian Xu (stanleyxu2005)
    • Votes: 0
    • Watchers: 13
