ARROW-8964

[Python][Parquet] improve reading of partitioned parquet datasets whose schema changed


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.17.1
    • Fix Version/s: None
    • Component/s: Python
    • Labels: None
    • Environment: Ubuntu 18.04, latest miniconda with Python 3.7, pyarrow 0.17.1

    Description

      Hi there, I'm encountering the following issue when reading from HDFS:

       

      My situation:

      I have a partitioned parquet dataset in HDFS whose recent partitions contain parquet files with more columns than the older ones. When I read the data using pyarrow.dataset.dataset and filter on recent data, I still get only the columns that are also present in the old parquet files. I'd like to somehow merge the schemas, or use the schema of the parquet files from which the data is actually loaded.

      I am reading the data with:

      `pyarrow.dataset.dataset(path_to_hdfs_directory, partitioning='hive').to_table(filter=my_filter_expression).to_pandas()`

      Is there a way to handle schema changes so that the loaded data contains all columns?

      Everything works fine when I copy the needed parquet files into a separate folder, but that is a very inconvenient way of working.
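
      A possible workaround (a minimal sketch, not verified against this exact setup): pyarrow.dataset.dataset accepts an explicit schema, so one can collect the per-file schemas, unify them, and re-open the dataset with the merged schema; files that lack a column then yield nulls for it. The path, the `date` partition column, and the filter below are placeholders, and pa.unify_schemas is assumed to be available, which requires a pyarrow release newer than the 0.17.1 reported here.

      ```python
      import pyarrow as pa
      import pyarrow.dataset as ds

      # Placeholder path and filter; both are assumptions for illustration.
      path_to_hdfs_directory = "hdfs://namenode:8020/path/to/dataset"
      my_filter_expression = ds.field("date") >= "2020-05-01"

      # First discovery pass: by default the dataset schema is inferred from
      # a single file, which is why columns added in newer partitions go missing.
      dataset = ds.dataset(path_to_hdfs_directory, partitioning="hive")

      # Collect the physical schema of every parquet file and merge them,
      # together with the discovered dataset schema (which carries the
      # partition columns). pa.unify_schemas is assumed to exist in the
      # installed pyarrow version.
      file_schemas = [frag.physical_schema for frag in dataset.get_fragments()]
      unified_schema = pa.unify_schemas([dataset.schema] + file_schemas)

      # Re-open the dataset with the merged schema; files that lack a column
      # are read with that column filled with nulls.
      dataset = ds.dataset(
          path_to_hdfs_directory,
          partitioning="hive",
          schema=unified_schema,
      )
      df = dataset.to_table(filter=my_filter_expression).to_pandas()
      ```

      With the merged schema in place, the filtered read should return every column that appears anywhere in the dataset.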

       


People

    • Assignee: Unassigned
    • Reporter: Ira Saktor
    • Votes: 0
    • Watchers: 4
