Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 2.0.0
- Fix Version/s: None
Description
As part of SPARK-14118, Spark SQL removed support for sending ALTER TABLE CHANGE COLUMN commands to Hive. This restriction was loosened in https://github.com/apache/spark/pull/12714 to allow those commands if they only change the column comment.
Wikimedia has been evolving Parquet-backed Hive tables whose data originates from JSON events by adding newly found columns to the Hive table schema, via a Spark job we call 'Refine'. We do this by recursively merging an input DataFrame schema with the Hive table's DataFrame schema, finding new fields, and then issuing an ALTER TABLE statement to add the columns. However, because we allow nested data types in the incoming JSON data, we make extensive use of struct type fields. To add a newly detected field inside a nested data type, we must alter the struct column and append the new field to it, which requires a CHANGE COLUMN statement that alters the column type. In reality, the 'type' of the column is not changing; a new field is simply being added to the struct. But to SQL, this looks like a type change, as the sketch below illustrates.
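For illustration only (the table and column names here are hypothetical, not from our production schema), this is the shape of statement Refine issues when a new field appears inside a struct column. Under the Spark 2.x restriction, the statement is rejected because the declared type differs from the existing one, even though it only appends a field:

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// The table was created with a nested struct column:
//   meta STRUCT<id: STRING, dt: STRING>
// A new field meta.domain shows up in incoming JSON, so Refine re-issues
// the full widened struct type via CHANGE COLUMN. Every existing field is
// untouched; only `domain` is appended.
spark.sql("""
  ALTER TABLE event CHANGE COLUMN meta
  meta STRUCT<id: STRING, dt: STRING, domain: STRING>
""")
// Spark 2.x's DDL check compares the old and new types, sees they are not
// equal, and rejects this as an unsupported column type change.
{code}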
We were about to upgrade to Spark 2, but this new restriction on the SQL DDL that can be sent to Hive will block us. I believe this is fixable by adding an exception in command/ddl.scala to allow ALTER TABLE CHANGE COLUMN with a new type when the original and destination types are both struct types and the destination type only adds new fields, as sketched below.
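A minimal sketch of the compatibility check such an exception would need. The helper name is hypothetical and the real change would live in command/ddl.scala; this only shows the rule: both types are structs, and every existing field survives with a compatible type, recursing so fields may also be added inside nested structs:

{code:scala}
import org.apache.spark.sql.types.{DataType, StructType}

// Hypothetical helper: returns true when `newType` is the same struct as
// `oldType` except for appended fields (possibly at nested levels).
def onlyAddsFields(oldType: DataType, newType: DataType): Boolean =
  (oldType, newType) match {
    case (o: StructType, n: StructType) =>
      o.fields.forall { oldField =>
        n.fields.find(_.name == oldField.name).exists { newField =>
          // Identical types are fine; otherwise recurse so that new
          // fields may also have been added inside a nested struct.
          oldField.dataType == newField.dataType ||
            onlyAddsFields(oldField.dataType, newField.dataType)
        }
      }
    case _ => false
  }
{code}

A real implementation would also need to respect spark.sql.caseSensitive when matching field names, since Hive treats column names case-insensitively.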
Issue Links
- is related to: SPARK-26519 spark sql CHANGE COLUMN not working (Resolved)
- relates to: SPARK-14118 Implement DDL/DML commands for Spark 2.0 (Resolved)
- links to