Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Not A Problem
- Affects Version/s: 2.2.0
- Fix Version/s: None
- Component/s: None
Description
When a user sets a Hadoop configuration in spark-defaults.conf, for example:
spark.hadoop.mapreduce.input.fileinputformat.split.maxsize 100000
and then starts spark-sql and uses the SET command to override this configuration, the new value does not take effect: the value from spark-defaults.conf still wins.
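A minimal reproduction sketch of the steps described above; the 200000 override value is an illustrative assumption, not from the original report:

```sql
-- spark-defaults.conf contains:
--   spark.hadoop.mapreduce.input.fileinputformat.split.maxsize 100000

-- In the spark-sql shell, attempt to override the Hadoop property:
SET spark.hadoop.mapreduce.input.fileinputformat.split.maxsize=200000;

-- Reported behavior: subsequent queries still split input using the
-- 100000 value from spark-defaults.conf rather than the SET value.
```

The resolution "Not A Problem" suggests this is expected behavior: `spark.hadoop.*` properties are copied into the Hadoop Configuration when the session's underlying SparkContext is created, so a later SET in the SQL shell does not necessarily propagate to already-initialized Hadoop settings.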