Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version: 4.0.0
Description
Create an example parquet table with partitions and insert data in Spark:
create table t(col1 string, col2 string, col3 string) using parquet location 'some/path/parquet-test' partitioned by (col1, col2);
insert into t (col1, col2, col3) values ('a', 'b', 'c');
Go into the parquet-test path on the filesystem and copy a parquet data file from the col1=a/col2=b directory into col1=a. After that, try to create a new table based on the parquet data in Spark:
create table broken_table using parquet location 'some/path/parquet-test';
This query fails with an internal error. Stack trace excerpt:
org.apache.spark.SparkException: [INTERNAL_ERROR] Eagerly executed command failed. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace. SQLSTATE: XX000
...
Caused by: java.lang.AssertionError: assertion failed: Conflicting partition column names detected:
	Partition column name list #0: col1
	Partition column name list #1: col1, col2
For partitioned table directories, data files should only live in leaf directories.
And directories at the same level should have the same partition column name.
Please check the following directories for unexpected files or inconsistent partition column names:
	file:some/path/parquet-test/col1=a
	file:some/path/parquet-test/col1=a/col2=b
	at scala.Predef$.assert(Predef.scala:279)
	at org.apache.spark.sql.execution.datasources.PartitioningUtils$.resolvePartitions(PartitioningUtils.scala:391)
	...
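For concreteness, below is a minimal end-to-end sketch of the reproduction that can be pasted into spark-shell. It is not taken from the original report: the base path is a placeholder, and the java.nio copy stands in for the manual filesystem step described above.

import java.nio.file.{Files, Paths, StandardCopyOption}
import scala.jdk.CollectionConverters._

// Placeholder path mirroring the report; substitute an absolute path in practice.
val base = "some/path/parquet-test"

// Step 1: create the partitioned parquet table and insert one row.
spark.sql(s"""
  CREATE TABLE t (col1 STRING, col2 STRING, col3 STRING)
  USING parquet
  LOCATION '$base'
  PARTITIONED BY (col1, col2)
""")
spark.sql("INSERT INTO t (col1, col2, col3) VALUES ('a', 'b', 'c')")

// Step 2: copy a data file from the leaf directory col1=a/col2=b one level up
// into col1=a, so data files now exist at two different partition depths.
val leafDir  = Paths.get(base, "col1=a", "col2=b")
val dataFile = Files.list(leafDir).iterator().asScala
  .find(_.getFileName.toString.endsWith(".parquet"))
  .get  // assumes the insert above produced at least one parquet file
Files.copy(dataFile, Paths.get(base, "col1=a").resolve(dataFile.getFileName),
  StandardCopyOption.REPLACE_EXISTING)

// Step 3: partition discovery over the location now sees conflicting partition
// column lists (col1 vs. col1, col2) and fails with the assertion error shown
// above instead of a user-facing error.
spark.sql(s"CREATE TABLE broken_table USING parquet LOCATION '$base'")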