- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.0.0
- Component/s: Spark Core
- Labels: None
For compatibility with older versions of Spark, it would be nice to have an option `spark.hadoop.validateOutputSpecs` (default: true) with the description: "If set to true, validates the output specification used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing output directories."

This would simply wrap the checking introduced in SPARK-1100 (https://issues.apache.org/jira/browse/SPARK-1100, https://github.com/apache/spark/pull/11) in a check of the Spark conf, skipping the validation when the option is set to false.
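A minimal sketch of the gating logic being proposed, written in Java for illustration (Spark itself is Scala) and using a plain `Map` in place of `SparkConf`; the class and method names are hypothetical:

```java
import java.util.Map;

public class OutputSpecCheck {
    // Read the proposed flag, defaulting to true as the issue describes.
    static boolean shouldValidate(Map<String, String> conf) {
        return Boolean.parseBoolean(
            conf.getOrDefault("spark.hadoop.validateOutputSpecs", "true"));
    }

    // Run the output-spec check only when the flag is enabled, mirroring
    // how the validation from SPARK-1100 would be wrapped in a conf check.
    static void validateOutputSpec(Map<String, String> conf, boolean outputDirExists) {
        if (shouldValidate(conf) && outputDirExists) {
            throw new IllegalStateException("Output directory already exists");
        }
    }
}
```

With the flag set to `"false"`, the pre-existing-directory exception is silenced; with it unset or `"true"`, the old behavior is preserved.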
- relates to
  - SPARK-1993: Let users skip checking output directory (Resolved)
  - SPARK-2039: Run hadoop output checks for all formats (Resolved)