In parquet-mr 1.8.1, constructing an empty GroupType (and thus an empty MessageType) is no longer allowed (see PARQUET-278). This change makes sense in most cases, since Parquet doesn't support empty groups. However, there is one use case where an empty MessageType is valid: passing it as the requestedSchema constructor argument of ReadContext when counting rows in a Parquet file. This works because Parquet can retrieve the row count from block metadata without materializing any columns. Take the following PySpark shell snippet (1.5-SNAPSHOT, which uses parquet-mr 1.7.0) as an example:
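The metadata-only counting path can be illustrated with a toy model. Every name below is hypothetical, a stand-in rather than real parquet-mr API; it only shows that the row count is recoverable from per-row-group footer metadata without reading any column data:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BlockMetaData:
    # Hypothetical stand-in for parquet-mr's per-row-group footer entry,
    # which records the row count of that block.
    row_count: int

def count_rows(blocks: List[BlockMetaData]) -> int:
    # Counting needs only the footer metadata: no column pages are read,
    # which is why an empty requested schema is sufficient.
    return sum(b.row_count for b in blocks)

footer = [BlockMetaData(4096), BlockMetaData(4096), BlockMetaData(1808)]
print(count_rows(footer))  # -> 10000
```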
Parquet-related log lines:
We can see that Spark SQL passes no requested columns to the underlying Parquet reader. What happens here is the following:
- Spark SQL creates a CatalystRowConverter with zero converters (which thus only generates empty rows).
- InternalParquetRecordReader first obtains the row count from block metadata (here).
- MessageColumnIO returns an EmptyRecordReader for reading the Parquet file (here).
- InternalParquetRecordReader.nextKeyValue() is invoked n times, where n equals the row count. Each time, it invokes the converter created by Spark SQL and produces an empty Spark SQL row object.
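The steps above can be simulated in a few lines of plain Python (all names are illustrative, not Spark SQL or parquet-mr identifiers): the reader never touches column data, it simply asks the zero-column converter for an empty row n times.

```python
class ZeroColumnConverter:
    """Hypothetical stand-in for Spark SQL's CatalystRowConverter when the
    requested schema has no columns: every conversion yields an empty row."""
    def convert(self):
        return ()  # an empty row

def read_all(row_count, converter):
    # Models nextKeyValue() being invoked row_count times against an
    # empty record reader: no column values are decoded, only empty
    # rows are emitted, which is all a count needs.
    return [converter.convert() for _ in range(row_count)]

rows = read_all(5, ZeroColumnConverter())
print(len(rows))                    # -> 5
print(all(r == () for r in rows))   # -> True
```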
This issue is also the cause of HIVE-11611: when upgrading to Parquet 1.8.1, Hive worked around it by using tableSchema as the requestedSchema when no columns are requested (here). IMO this introduces a performance regression in cases like counting, because all columns must now be materialized just to count rows.
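The cost difference can be made concrete with a toy comparison (hypothetical names again, a sketch rather than real reader code): counting from metadata does one addition per row group, while requesting the full tableSchema forces every value of every column to be decoded before the rows can be counted.

```python
def count_from_metadata(block_row_counts):
    # Fast path: one addition per row group; no column data is touched.
    return sum(block_row_counts)

def count_by_materializing(columns):
    # Workaround path: decode every value of every requested column,
    # then count the assembled rows. Returns (row_count, decode_work)
    # where decode_work stands in for the page-decoding effort.
    n_rows = len(columns[0]) if columns else 0
    decoded = sum(len(col) for col in columns)
    return n_rows, decoded

blocks = [1000, 1000, 500]
table = [list(range(2500)) for _ in range(8)]  # 8 columns x 2500 rows
print(count_from_metadata(blocks))  # -> 2500
rows, work = count_by_materializing(table)
print(rows, work)                   # -> 2500 20000
```

Both paths agree on the count, but the workaround's decoding work grows with the number of columns, which is the regression described above.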