Description
In ANSI mode, schema string parsing should fail if the schema uses an ANSI reserved keyword as an attribute name:
spark.conf.set("spark.sql.ansi.enabled", "true")
spark.sql("""select from_json('{"time":"26/10/2015"}', 'time Timestamp', map('timestampFormat', 'dd/MM/yyyy'));""").show

output:

Cannot parse the data type: no viable alternative at input 'time'(line 1, pos 0)

== SQL ==
time Timestamp
^^^
However, this query may accidentally succeed in certain cases because the DataType parser sticks to the configuration of the first session created in the current thread:
DataType.fromDDL("time Timestamp")
val newSpark = spark.newSession()
newSpark.conf.set("spark.sql.ansi.enabled", "true")
newSpark.sql("""select from_json('{"time":"26/10/2015"}', 'time Timestamp', map('timestampFormat', 'dd/MM/yyyy'));""").show

output:

+--------------------------------+
|from_json({"time":"26/10/2015"})|
+--------------------------------+
|            {2015-10-26 00:00...|
+--------------------------------+
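The mechanism can be illustrated with a minimal sketch (plain Python, not Spark internals): if the parser's configuration is read from a thread-local "active session" and cached on first use, a later session on the same thread cannot change the parser's behavior. The names `set_active_session` and `parse_ddl` here are hypothetical stand-ins for the session/parser interaction, under the assumption that the config is frozen the first time the parser runs.

```python
# Simplified model of the bug: parser config is captured from the thread's
# first active "session" and cached, so a new session on the same thread
# cannot flip the parser into ANSI mode.
import threading

_active = threading.local()   # mimics the thread-local active session
_parser_conf = None           # cached on first parse, like a lazily built parser

def set_active_session(ansi_enabled: bool) -> None:
    _active.conf = {"spark.sql.ansi.enabled": ansi_enabled}

def parse_ddl(ddl: str) -> str:
    """Reject the reserved keyword `time` only if the *cached* conf has ANSI on."""
    global _parser_conf
    if _parser_conf is None:
        _parser_conf = _active.conf   # config frozen at first use
    if _parser_conf["spark.sql.ansi.enabled"] and ddl.startswith("time "):
        raise ValueError(
            "Cannot parse the data type: no viable alternative at input 'time'")
    return ddl

set_active_session(ansi_enabled=False)
parse_ddl("time Timestamp")             # first use: caches the non-ANSI config

set_active_session(ansi_enabled=True)   # "new session" with ANSI mode on
print(parse_ddl("time Timestamp"))      # still succeeds: time Timestamp
```

Under this model, the fix would be for the parser to consult the currently active session's configuration on every parse instead of the cached copy.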