Description
Spark 2.2 is unable to read the partitioned table created by Spark 2.1 when the table schema does not put the partitioning column at the end of the schema.
assert(partitionFields.map(_.name) == partitionColumnNames)
This code is from the following files:
When reading the table metadata back from the metastore, we also need to reorder the columns so that the partitioning columns come at the end of the schema, matching what the assertion expects.
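A minimal sketch of that reordering, with hypothetical names (`Field`, `reorderSchema` are illustrations, not the actual Spark patch): move the partitioning columns to the end of the field list, in the order given by the table's partition column names, so the assertion above holds.

```scala
// Hypothetical sketch: reorder schema fields so partition columns come last,
// in the order declared by partitionColumnNames.
case class Field(name: String, dataType: String)

def reorderSchema(fields: Seq[Field], partitionColumnNames: Seq[String]): Seq[Field] = {
  // Split the schema into partition fields and ordinary data fields.
  val (partitionFields, dataFields) =
    fields.partition(f => partitionColumnNames.contains(f.name))
  // Keep partition fields in the order declared by partitionColumnNames.
  val orderedPartitionFields =
    partitionColumnNames.flatMap(name => partitionFields.find(_.name == name))
  dataFields ++ orderedPartitionFields
}

// Example: a Spark 2.1 table whose partition column `p` is not last.
val schema = Seq(Field("p", "int"), Field("a", "string"), Field("b", "string"))
val reordered = reorderSchema(schema, Seq("p"))
// reordered.map(_.name) is Seq("a", "b", "p"), so
// assert(partitionFields.map(_.name) == partitionColumnNames) no longer fails.
```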