Details
- Type: Improvement
- Status: Triage Needed
- Priority: P2
- Resolution: Fixed
Description
This is a common request from users. We did not do it in the past because we tried to keep Hadoop objects out of ParquetIO's public API. However, there are valid reasons to do it:
1. Many Parquet features are configured via public helper methods that write settings into Hadoop's Configuration object, e.g. column projection via `AvroReadSupport.setRequestedProjection(conf, projectionSchema);` or predicate filters via `ParquetInputFormat.setFilterPredicate(sc.hadoopConfiguration(), filterPredicate);`. Exposing the Configuration would let power users use these advanced features without adding any maintenance burden on the IO side (see the sketch after this list).
2. The main reason to avoid the Hadoop Configuration object was to align with future Parquet APIs that do not require Hadoop (see PARQUET-1126 for details), but that does not appear likely to happen soon.
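As a rough illustration of point 1, the sketch below prepares a Hadoop Configuration using Parquet's own public helpers (AvroReadSupport and ParquetInputFormat from Parquet 1.x) and then hands it to a `withConfiguration(...)` hook on ParquetIO. That hook is hypothetical here; it is what this issue asks ParquetIO to expose. The record schema and file path are made-up examples.

```java
import org.apache.avro.Schema;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.avro.AvroReadSupport;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.hadoop.ParquetInputFormat;

public class ParquetIoConfigSketch {

  public static void main(String[] args) {
    // Prepare a Hadoop Configuration using Parquet's public helper methods.
    Configuration conf = new Configuration(false);

    // Column projection: only materialize the "id" and "name" columns.
    Schema projection = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"long\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}]}");
    AvroReadSupport.setRequestedProjection(conf, projection);

    // Predicate pushdown: skip row groups / rows where id != 42.
    FilterPredicate onlyId42 = FilterApi.eq(FilterApi.longColumn("id"), 42L);
    ParquetInputFormat.setFilterPredicate(conf, onlyId42);

    // Hypothetical hook requested by this issue: pass the prepared
    // Configuration straight through to ParquetIO, e.g.
    //   ParquetIO.read(projection)
    //       .from("/path/to/users.parquet")
    //       .withConfiguration(conf);
    // Today there is no such public entry point, so these settings cannot
    // reach the underlying Parquet reader without changes to the IO.
  }
}
```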
Attachments
Issue Links
- is related to:
  - BEAM-10284 Allow to pass config to Parquet sink (Resolved)
  - BEAM-11527 Support user configurable Hadoop Configuration flags for ParquetIO (Triage Needed)
- links to