Description
Inserting data into Hive tables currently has its own implementation, distinct from the data source write path: InsertIntoHiveTable, SparkHiveWriterContainer and SparkHiveDynamicPartitionWriterContainer.
I think it should be possible to unify these with the data source implementation, InsertIntoHadoopFsRelationCommand. We can start by implementing an OutputWriterFactory/OutputWriter that uses Hive's SerDes to write data, as sketched below.
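To make the idea concrete, here is a rough sketch of what a SerDe-backed OutputWriter could look like. This is not a working implementation: the OutputWriter API has changed shape across Spark releases (earlier versions take Row rather than InternalRow), and the class name HiveSerDeOutputWriter is hypothetical. FileSinkDesc, HiveFileFormatUtils and the ObjectInspector machinery are standard Hive classes, and HiveInspectors.wrapperFor is Spark's existing Catalyst-to-Hive value conversion; a matching OutputWriterFactory would simply construct one of these per task.

{code:scala}
import scala.collection.JavaConverters._

import org.apache.hadoop.fs.Path
import org.apache.hadoop.hive.ql.io.HiveFileFormatUtils
import org.apache.hadoop.hive.ql.plan.FileSinkDesc
import org.apache.hadoop.hive.serde2.Serializer
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.ObjectInspectorCopyOption
import org.apache.hadoop.hive.serde2.objectinspector.{ObjectInspectorUtils, StructObjectInspector}
import org.apache.hadoop.mapred.{JobConf, Reporter}

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.datasources.OutputWriter
import org.apache.spark.sql.hive.HiveInspectors
import org.apache.spark.sql.types.StructType

// Hypothetical sketch: an OutputWriter that serializes rows through the
// table's Hive SerDe and writes them with Hive's own record writer.
class HiveSerDeOutputWriter(
    path: String,
    fileSinkConf: FileSinkDesc,
    jobConf: JobConf,
    dataSchema: StructType)
  extends OutputWriter with HiveInspectors {

  private val tableDesc = fileSinkConf.getTableInfo

  // Instantiate the table's SerDe and use it as a Serializer.
  private val serializer = {
    val s = tableDesc.getDeserializerClass.newInstance().asInstanceOf[Serializer]
    s.initialize(jobConf, tableDesc.getProperties)
    s
  }

  // Hive's record writer for the table's configured output format.
  private val hiveWriter = HiveFileFormatUtils.getHiveRecordWriter(
    jobConf, tableDesc, serializer.getSerializedClass, fileSinkConf,
    new Path(path), Reporter.NULL)

  private val standardOI = ObjectInspectorUtils
    .getStandardObjectInspector(
      tableDesc.getDeserializer.getObjectInspector,
      ObjectInspectorCopyOption.JAVA)
    .asInstanceOf[StructObjectInspector]

  private val fieldOIs =
    standardOI.getAllStructFieldRefs.asScala.map(_.getFieldObjectInspector).toArray
  private val dataTypes = dataSchema.map(_.dataType).toArray
  // wrapperFor comes from HiveInspectors: Catalyst value -> Hive object.
  private val wrappers = fieldOIs.zip(dataTypes).map { case (f, dt) => wrapperFor(f, dt) }
  private val outputData = new Array[Any](fieldOIs.length)

  override def write(row: InternalRow): Unit = {
    var i = 0
    while (i < fieldOIs.length) {
      outputData(i) = if (row.isNullAt(i)) null else wrappers(i)(row.get(i, dataTypes(i)))
      i += 1
    }
    hiveWriter.write(serializer.serialize(outputData, standardOI))
  }

  override def close(): Unit = hiveWriter.close(false)
}
{code}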
Note that one other major difference is the commit step: data source tables write directly to the final destination without using a staging directory, and Spark itself then adds the partitions/tables to the catalog. Hive tables instead write to a staging directory, and then call the Hive metastore's loadPartition/loadTable functions to move that data into place.
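For illustration, a simplified contrast of the two commit paths. Both createPartitions and loadPartition exist on Spark's ExternalCatalog, but their parameter lists have shifted between versions, so the signatures below are approximate and the helper object is hypothetical.

{code:scala}
import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
import org.apache.spark.sql.catalyst.catalog.{CatalogTablePartition, ExternalCatalog}

object CommitPaths {

  // Data source path: tasks already wrote files directly under the table
  // location, so Spark only needs to register the new partitions itself.
  def commitDataSourceStyle(
      catalog: ExternalCatalog,
      db: String,
      table: String,
      newPartitions: Seq[CatalogTablePartition]): Unit = {
    catalog.createPartitions(db, table, newPartitions, ignoreIfExists = true)
  }

  // Hive path: tasks wrote under a staging directory; loadPartition asks the
  // metastore to move those files into the table location and register the
  // partition in one step (loadTable is the unpartitioned equivalent).
  // Signature is approximate; some versions take extra flags.
  def commitHiveStyle(
      catalog: ExternalCatalog,
      db: String,
      table: String,
      stagingDir: String,
      partSpec: TablePartitionSpec,
      overwrite: Boolean): Unit = {
    catalog.loadPartition(
      db, table, stagingDir, partSpec,
      isOverwrite = overwrite, holdDDLTime = false, inheritTableSpecs = true)
  }
}
{code}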