Currently, in Spark SQL, the initial creation of a table schema can be classified into two groups. This applies to both Hive tables and data source tables:
Group A. Users specify the schema.
Case 1 CREATE TABLE AS SELECT: the schema is determined by the result schema of the SELECT clause. For example,
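A typical CTAS statement for this case might look like the following (the table and column names are illustrative):

```sql
-- The schema of t1 is copied from the result schema of the SELECT clause
CREATE TABLE t1 AS SELECT id, name FROM source_table;
```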
Case 2 CREATE TABLE: users explicitly specify the schema. For example,
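An illustrative statement for this case, with an explicitly specified schema:

```sql
-- Users explicitly list the column names and types
CREATE TABLE t2 (id INT, name STRING) USING parquet;
```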
Group B. Spark SQL infers the schema at runtime.
Case 3 CREATE TABLE: users do not specify the schema, only the path to the file location. For example,
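An illustrative statement for this case, pointing at an existing data location (the path shown is a placeholder):

```sql
-- No schema is given; Spark SQL infers it from the files at the given path
CREATE TABLE t3 USING parquet OPTIONS (path '/data/existing/parquet_files');
```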
Currently, Spark SQL does not store the inferred schema in the external catalog for the cases in Group B. When users refresh the metadata cache, or access the table for the first time after (re)starting Spark, Spark SQL will infer the schema and store it in the metadata cache to improve the performance of subsequent metadata requests. However, this runtime schema inference can cause undesirable schema changes after each restart of Spark.
It is desirable to store the inferred schema in the external catalog when creating the table. When users intend to refresh the schema, they can issue `REFRESH TABLE`. Spark SQL will then infer the schema again based on the previously specified table location and update/refresh the schema in both the external catalog and the metadata cache.
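Under the proposed behavior, refreshing the stored schema would be triggered explicitly; a sketch using the table from Case 3:

```sql
-- Re-infer the schema from the table's file location and update
-- both the external catalog and the metadata cache
REFRESH TABLE t3;
```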