Description
We currently expose both Hadoop configuration and Spark SQL configuration in RuntimeConfig. I think we can remove the Hadoop configuration part and instead generate a Hadoop Configuration on the fly by passing all the SQL configurations into it. That way there is a single configuration interface (in Java/Scala/Python/SQL) for end users.
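The idea above can be sketched with a small, self-contained example (this is not Spark's actual implementation — the `spark.hadoop.` prefix convention is taken from Spark's documented behavior, but the function and dict-based "configuration" here are hypothetical stand-ins for `RuntimeConfig` and Hadoop's `Configuration`):

```python
# Hypothetical sketch: derive a Hadoop-style configuration on the fly
# from a single SQL configuration map, so end users only ever set
# values through one interface.

HADOOP_PREFIX = "spark.hadoop."  # conventional prefix assumed here


def to_hadoop_conf(sql_conf: dict) -> dict:
    """Build a fresh Hadoop-style key/value map from SQL configurations.

    Keys carrying the spark.hadoop. prefix are unwrapped so Hadoop
    components see them under their native names; everything else is
    passed through unchanged.
    """
    hadoop_conf = {}
    for key, value in sql_conf.items():
        if key.startswith(HADOOP_PREFIX):
            hadoop_conf[key[len(HADOOP_PREFIX):]] = value
        else:
            hadoop_conf[key] = value
    return hadoop_conf


if __name__ == "__main__":
    sql_conf = {
        "spark.sql.shuffle.partitions": "200",
        "spark.hadoop.fs.s3a.connection.maximum": "100",
    }
    print(to_hadoop_conf(sql_conf))
```

Because the Hadoop configuration is regenerated from the SQL configuration each time, there is no second mutable store to keep in sync.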
Issue Links
- is duplicated by SPARK-12307 "ParquetFormat options should be exposed through the DataFrameReader/Writer options API" (Resolved)
- relates to SPARK-13912 "spark.hadoop.* configurations are not applied for Parquet Data Frame Readers" (Resolved)