Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Won't Fix
- Affects Version/s: 2.2.0
- Fix Version/s: None
- Component/s: None
Description
In Spark 2.2.0, the default value of `spark.sql.hive.caseSensitiveInferenceMode` has a critical issue.
SPARK-19611 made `INFER_AND_SAVE` the default in 2.2.0 because Spark 2.1.0 broke some Hive tables backed by case-sensitive data files. This situation occurs for any Hive table that wasn't created by Spark or that was created prior to Spark 2.1.0. If a user runs a query over such a table with a case-sensitive field name in the query projection or the query filter, the query returns 0 results in every case.
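The root cause is a case-sensitivity mismatch: the Hive Metastore stores column names lower-cased, while Parquet resolves columns case-sensitively, so without schema inference the metastore schema never matches the file schema. A minimal sketch of the symptom (table name and path hypothetical, run with `NEVER_INFER`):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .enableHiveSupport()
  .getOrCreate()

// Parquet data written with a case-sensitive (camelCase) column name.
spark.range(10).selectExpr("id AS fieldOne")
  .write.mode("overwrite").parquet("/tmp/mixed_case_t")

// Declaring the column lower-cased, as Hive does, makes the metastore
// schema diverge from the case-sensitive Parquet footer.
spark.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS mixed_case_t (fieldone BIGINT)
  STORED AS PARQUET LOCATION '/tmp/mixed_case_t'
""")

// The lower-cased name is pushed down to the case-sensitive Parquet
// reader, which finds no such column: the query returns 0 rows.
spark.sql("SELECT fieldone FROM mixed_case_t WHERE fieldone > 5").show()
```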
- However, SPARK-22306 reports that this also corrupts the Hive Metastore schema by removing bucketing information (BUCKETING_COLS, SORT_COLS) and changing the table owner (a quick metadata check is sketched after this list).
- Since Spark 2.3.0 supports bucketing, BUCKETING_COLS and SORT_COLS look okay at least. However, we still need to figure out the owner-change issue, and we cannot backport the bucketing patch into `branch-2.2`. We need more tests before releasing 2.3.0.
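One way to spot this corruption from Spark itself, without querying the metastore database directly: `DESC FORMATTED` surfaces the table owner and the bucket/sort columns that back BUCKETING_COLS and SORT_COLS. A minimal sketch, assuming a hypothetical bucketed table `bucketed_t` and the Hive-enabled session from the sketch above:

```scala
// Snapshot the metastore-visible metadata before and after querying the
// table with INFER_AND_SAVE enabled; the Owner, Num Buckets, Bucket Columns
// and Sort Columns rows should come back unchanged.
spark.sql("DESC FORMATTED bucketed_t").show(100, truncate = false)
```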
Hive Metastore is a shared resource, and Spark should not corrupt it by default. This issue proposes to change the default of that option back to `NEVER_INFER`, the behavior prior to Spark 2.2.0. Users can take the risk of enabling `INFER_AND_SAVE` by themselves, as sketched below.
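If the default does revert, opting back in is a one-line configuration; a minimal sketch (`INFER_ONLY` is the safer middle ground, since it never writes the inferred schema back to the shared metastore):

```scala
import org.apache.spark.sql.SparkSession

// Explicitly accept the risk of schema inference. Valid modes are
// NEVER_INFER, INFER_ONLY and INFER_AND_SAVE; only INFER_AND_SAVE
// persists the inferred case-sensitive schema into the metastore.
val spark = SparkSession.builder()
  .config("spark.sql.hive.caseSensitiveInferenceMode", "INFER_AND_SAVE")
  .enableHiveSupport()
  .getOrCreate()
```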
Attachments
Issue Links
- is superseded by
  - SPARK-22306 INFER_AND_SAVE overwrites important metadata in Parquet Metastore table (Resolved)
- relates to
  - SPARK-22306 INFER_AND_SAVE overwrites important metadata in Parquet Metastore table (Resolved)
  - SPARK-19611 Spark 2.1.0 breaks some Hive tables backed by case-sensitive data files (Resolved)