Description
Right now, it is hard to set data source reader/writer-specific confs (e.g. Parquet's row group size) correctly. Users need to set those confs in the Hadoop conf before starting the application, or through org.apache.spark.deploy.SparkHadoopUtil.get.conf at runtime (see the sketch below). It would be great if we had an easy way to set those confs.
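For illustration, a minimal sketch of the runtime workaround mentioned above, assuming the Parquet row group size is controlled by the Hadoop property parquet.block.size; the 128 MB value and the output path are placeholders:

{code:scala}
import org.apache.spark.deploy.SparkHadoopUtil
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("example").getOrCreate()

// Workaround today: mutate the globally shared Hadoop Configuration at runtime.
// Note this affects every subsequent read/write in the application, not just
// the one write below.
SparkHadoopUtil.get.conf.setInt("parquet.block.size", 128 * 1024 * 1024)

spark.range(1000).write.parquet("/tmp/example-output")
{code}

The pre-launch alternative is to pass the property through the spark.hadoop.* prefix when submitting the application, e.g. --conf spark.hadoop.parquet.block.size=134217728, which Spark copies into the Hadoop Configuration at startup. Neither approach allows scoping the setting to a single read or write.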