Description
Spark SQL doesn't support creating partitioned tables using Hive CTAS in SQL syntax. However, the equivalent is supported through the DataFrameWriter API:

val df = Seq(("a", 1)).toDF("part", "id")
df.write.format("hive").partitionBy("part").saveAsTable("t")
Hive supports this in newer versions (https://issues.apache.org/jira/browse/HIVE-20241):
CREATE TABLE t PARTITIONED BY (part) AS SELECT 1 as id, "a" as part
To match the DataFrameWriter API, we should add this support to the SQL syntax.
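With that support in place, the SQL form could be issued directly from a Spark session and mirror the DataFrameWriter call above. A minimal sketch, assuming a running SparkSession named spark and that the proposed syntax is accepted:

// Sketch only: assumes the proposed partitioned Hive CTAS syntax is supported.
// This would be equivalent to:
//   df.write.format("hive").partitionBy("part").saveAsTable("t")
spark.sql("""CREATE TABLE t PARTITIONED BY (part) AS SELECT 1 AS id, "a" AS part""")

Note that, as in Hive, the partition column (part) is referenced only by name in the PARTITIONED BY clause; its type is inferred from the SELECT output.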