Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version: 0.9.0
- Fix Version: None
Description
When saving a ParquetDataset from Pandas, we don't always get consistent schemas, even if the source was a single DataFrame. This is because object columns such as strings can become empty in some partitions, and the resulting Arrow schema then differs: the central metadata stores the column as pa.string, whereas the partition file with the empty column stores it as pa.null.
The two schemas are still a valid match in terms of schema evolution, and we should respect that in https://github.com/apache/arrow/blob/79a22074e0b059a24c5cd45713f8d085e24f826a/python/pyarrow/parquet.py#L754. Instead of doing a pa.Schema.equals in https://github.com/apache/arrow/blob/79a22074e0b059a24c5cd45713f8d085e24f826a/python/pyarrow/parquet.py#L778, we should introduce a new method pa.Schema.can_evolve_to that is more graceful and returns True if a dataset piece has a null column where the main metadata states a nullable column of any type.
Attachments
Issue Links
- depends upon
  - ARROW-8039 [Python][Dataset] Support using dataset API in pyarrow.parquet with a minimal ParquetDataset shim (Resolved)
  - ARROW-9147 [C++][Dataset] Support null -> other type promotion in Dataset scanning (Resolved)
- is related to
  - ARROW-2860 [Python][Parquet][C++] Null values in a single partition of Parquet dataset, results in invalid schema on read (Open)
  - ARROW-2366 [Python][C++][Parquet] Support reading Parquet files having a permutation of column order (Resolved)