Due to PARQUET-251, BINARY columns in existing Parquet files may be written with corrupted statistics information. This information is used by the filter push-down optimization. Since Spark 1.5 turns on Parquet filter push-down by default, we may end up with wrong query results. PARQUET-251 has been fixed in parquet-mr 1.8.1, but Spark 1.5 is still using 1.7.0.
Note that this kind of corrupted Parquet file could be produced by any Parquet data model.
This affects all Spark SQL data types that can be mapped to Parquet BINARY, namely:
- StringType
- BinaryType
- DecimalType (but Spark SQL doesn't support pushing down filters on DecimalType columns for now)
To avoid wrong query results, we should disable filter push-down for columns of StringType and BinaryType until we upgrade to parquet-mr 1.8.1.
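In the meantime, users who hit wrong results can work around the bug themselves with the existing `spark.sql.parquet.filterPushdown` configuration, which disables push-down for all columns rather than just the affected BINARY-backed types. A sketch of that stopgap (Spark 1.5-era `SQLContext` API):

```scala
// Stopgap workaround: turn off Parquet filter push-down entirely, so the
// (possibly corrupted) BINARY column statistics are never consulted.
// This is broader than the per-type fix proposed above and costs some
// scan performance, but it restores correct query results.
sqlContext.setConf("spark.sql.parquet.filterPushdown", "false")

// Equivalent SQL form:
//   SET spark.sql.parquet.filterPushdown=false
```

Once Spark upgrades to parquet-mr 1.8.1, the flag can be flipped back to its default of `true`.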