While reading a Parquet file into `RecordBatch`es with `ParquetFileArrowReader` (row groups of 100,000 rows, batch size of 60,000), I started seeing this error after 300,000 rows had been read successfully:
Upon investigation, I found that when reading with `ParquetFileArrowReader`, if the input Parquet file has multiple row groups and a batch happens to end exactly at the end of a row group for an Int or Float column, no subsequent row groups are read.
A reproducer is attached. `ParquetFileArrowReader` should read all 20 values regardless of the batch size. However, with batch sizes such as 5 or 3 (which land on a boundary between row groups), not all of the rows are read.
To run the reproducer, decompress the attached parquet_file_arrow_reader.zip and run `cargo run`.
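The core of the reproducer is roughly the following (a sketch against the `parquet` crate's Arrow reader API; the file name `data.parquet` is a placeholder for the file in the attachment, whose exact name and schema I have not restated here):

```rust
use std::fs::File;
use std::sync::Arc;

use parquet::arrow::{ArrowReader, ParquetFileArrowReader};
use parquet::file::reader::SerializedFileReader;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder path: the attached file holds 20 values
    // spread across multiple row groups.
    let file = File::open("data.parquet")?;
    let file_reader = Arc::new(SerializedFileReader::new(file)?);
    let mut arrow_reader = ParquetFileArrowReader::new(file_reader);

    // A batch size of 5 makes a batch end exactly at a row-group
    // boundary, which triggers the bug described above.
    let batch_size = 5;
    let record_reader = arrow_reader.get_record_reader(batch_size)?;

    let mut total_rows = 0;
    for batch in record_reader {
        total_rows += batch?.num_rows();
    }

    // Expected: 20 rows. Observed: fewer, because reading stops
    // once a batch ends at the end of a row group.
    println!("read {} rows", total_rows);
    Ok(())
}
```

Running the same loop with a batch size that does not coincide with a row-group boundary reads all 20 rows.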
The actual output is as follows:
The expected output is as follows (all 20 rows should be read, regardless of the batch size):
Workaround: use a different batch size, one that does not fall on a row-group boundary.