Description
Suppose I have a catalog table with schema StructType(Seq(StructField("a", StructType(Seq(StructField("b", DataTypes.StringType), StructField("c", DataTypes.StringType)))))).
Suppose I now try to append a record to it:
{"a": {"c": "data1", "b": "data2"}}
That record will actually be appended as:
{"a": {"b": "data1", "c": "data2"}}
which is clearly not what the user wanted or expected (in my case it silently corrupted my data).
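For reference, here is a minimal sketch of how this can be reproduced. The table name target, the local SparkSession, and the use of insertInto are illustrative assumptions on my part; the exact write path may differ, but the effect is the same:

{code:scala}
import org.apache.spark.sql.SparkSession

object NestedFieldOrderRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("nested-field-order-repro")
      .master("local[*]")
      .getOrCreate()

    // Catalog table whose nested schema declares the fields in the order (b, c).
    spark.sql("CREATE TABLE target (a STRUCT<b: STRING, c: STRING>) USING parquet")

    // Incoming record declares the nested fields in the other order, (c, b).
    val incoming = spark.sql("SELECT named_struct('c', 'data1', 'b', 'data2') AS a")

    // The append is accepted, but the nested fields are matched by position
    // rather than by name, so 'data1' is stored under b and 'data2' under c.
    incoming.write.insertInto("target")

    spark.sql("SELECT a.b, a.c FROM target").show()
    // expected: b = data2, c = data1
    // actual:   b = data1, c = data2

    spark.stop()
  }
}
{code}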
It turns out that the user could provide a totally different record,
{"a": {"this column": "is totally different", "but": "the types match up"}}
and it'd still get written out, but as
{"a": {"b": "is totally different", "c": "the types match up"}}
This is because, in DDLPreprocessingUtils.castAndRenameOutput, Spark takes care to reorder top-level columns in line with what the output expects, but merely casts any other types. This works nicely when you, for example, write an int into a double field, but goes wrong on complex data types when the field types match up but the field names do not.
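The same by-position matching can be seen with a plain struct-to-struct cast, which as far as I can tell is what the nested fields end up going through. This is an illustrative sketch of that cast behaviour, not the exact code path:

{code:scala}
import org.apache.spark.sql.SparkSession

object StructCastByPosition {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("struct-cast-by-position")
      .master("local[*]")
      .getOrCreate()

    // Cast from struct<c: string, b: string> to struct<b: string, c: string>.
    // The cast succeeds because the field types line up, but values are
    // mapped by position, so 'data1' lands in b and 'data2' lands in c.
    spark.sql(
      """SELECT CAST(named_struct('c', 'data1', 'b', 'data2')
        |            AS struct<b: string, c: string>) AS a""".stripMargin
    ).show(truncate = false)

    spark.stop()
  }
}
{code}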