Spark / SPARK-9932 Data source API improvement (Spark 1.6) / SPARK-10146

Have an easy way to set data source reader/writer specific confs


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0
    • Component/s: SQL
    • Labels: None

    Description

      Right now, it is hard to set data source reader/writer specific confs correctly (e.g. Parquet's row group size). Users need to set those confs in the Hadoop conf before starting the application, or through org.apache.spark.deploy.SparkHadoopUtil.get.conf at runtime. It would be great if we had an easy way to set those confs.
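
      The row group size example corresponds to the Hadoop conf key parquet.block.size. Below is a minimal Scala sketch contrasting the runtime workaround mentioned above with a per-write option, assuming a Spark 2.0.0 session in which DataFrameReader/DataFrameWriter options are forwarded to the Hadoop configuration used for that read/write; the app name, paths, and block size are illustrative only.

      {code:scala}
      import org.apache.spark.sql.SparkSession

      // Hypothetical app name and paths; only the conf key parquet.block.size comes from the description.
      val spark = SparkSession.builder().appName("datasource-conf-example").getOrCreate()
      val df = spark.range(0, 1000000).toDF("id")

      // Workaround described above: mutate the application-wide Hadoop configuration
      // at runtime. Every subsequent Parquet write in this application is affected.
      spark.sparkContext.hadoopConfiguration.set("parquet.block.size", (64 * 1024 * 1024).toString)
      df.write.parquet("/tmp/rowgroup-global-conf")

      // Easier approach: pass the same Hadoop conf key as a writer option so it only
      // applies to this one write (assumed behavior after this issue's fix in 2.0.0).
      df.write
        .option("parquet.block.size", (64 * 1024 * 1024).toString)
        .parquet("/tmp/rowgroup-per-write-conf")
      {code}

      Scoping the conf to a single read or write keeps it from leaking into unrelated queries in the same application, which is the main drawback of the global Hadoop conf workaround.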


          People

            Assignee: Yin Huai (yhuai)
            Reporter: Yin Huai (yhuai)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved: