Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.3.0
Description
`ColumnVectorUtils.populate()` does not handle the CalendarInterval type correctly - https://github.com/apache/spark/blob/master/sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/ColumnVectorUtils.java#L93-L94 . A CalendarInterval value has the layout (months: int, days: int, microseconds: long) (https://github.com/apache/spark/blob/master/common/unsafe/src/main/java/org/apache/spark/unsafe/types/CalendarInterval.java#L58 ). However, the method linked above skips the `days` field entirely and writes the `microseconds` value into the wrong position.
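For illustration, here is a minimal, self-contained sketch of what the corrected population could look like, assuming the interval column is backed by three child vectors ordered as (months, days, microseconds), which is how `WritableColumnVector` allocates them. The class name and the standalone setup are hypothetical, not the actual patch:

```java
import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector;
import org.apache.spark.sql.execution.vectorized.WritableColumnVector;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.unsafe.types.CalendarInterval;

public class IntervalPopulateSketch {
  public static void main(String[] args) {
    int capacity = 4;
    WritableColumnVector col =
        new OnHeapColumnVector(capacity, DataTypes.CalendarIntervalType);
    CalendarInterval c = new CalendarInterval(1, 2, 3L);  // months=1, days=2, microseconds=3

    // Write each of the three interval fields into its own child vector,
    // repeated for every row of the (constant) column.
    col.getChild(0).putInts(0, capacity, c.months);        // months       -> child 0
    col.getChild(1).putInts(0, capacity, c.days);          // days         -> child 1 (currently skipped)
    col.getChild(2).putLongs(0, capacity, c.microseconds); // microseconds -> child 2 (currently written to child 1)

    // Reading the value back should round-trip all three fields.
    System.out.println(col.getInterval(0));
  }
}
```

The gist of the fix is just that: each of the three interval fields has to land in its own child vector.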
`ColumnVectorUtils.populate()` is used by the Parquet (https://github.com/apache/spark/blob/master/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedParquetRecordReader.java#L258 ) and ORC (https://github.com/apache/spark/blob/master/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnarBatchReader.java#L171 ) vectorized readers to populate partition columns, so Spark can potentially produce wrong results when reading a table with a CalendarInterval partition column. However, I also noticed that Spark explicitly disallows writing data with the CalendarInterval type (https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala#L586 ), so this might not be a big deal for users in practice. But it's worth fixing anyway.
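To make the affected code path concrete, here is a rough sketch of how a constant partition value flows into a column vector through `populate()`. It only mirrors the shape of the linked `initBatch()` logic, using a String partition column and hypothetical names, and assumes the `populate(WritableColumnVector, InternalRow, int)` signature as in the code linked above:

```java
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow;
import org.apache.spark.sql.execution.vectorized.ColumnVectorUtils;
import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.unsafe.types.UTF8String;

public class PartitionColumnSketch {
  public static void main(String[] args) {
    int capacity = 4096;
    // Vector backing the partition column of a batch; every row carries the same value.
    OnHeapColumnVector partitionCol =
        new OnHeapColumnVector(capacity, DataTypes.StringType);

    // The partition value comes from the file's partition directory, not from the file data.
    GenericInternalRow partitionValues =
        new GenericInternalRow(new Object[]{ UTF8String.fromString("2022-01-01") });

    // This is the call whose CalendarInterval branch is broken.
    ColumnVectorUtils.populate(partitionCol, partitionValues, 0);
    partitionCol.setIsConstant();

    System.out.println(partitionCol.getUTF8String(0));  // same value for every row
  }
}
```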
Caveat: I found the bug while reading through the related code path, and I don't have production experience with CalendarInterval partition columns. I think the fix should be straightforward, unless someone more experienced can dig up historical context explaining the current behavior. The code was introduced a long time ago and I couldn't find any more information on why it was implemented this way (https://github.com/apache/spark/pull/11435 ).