It looks like Arrow infers the column types from the first batch and applies them to all subsequent batches. However, the first batch may not contain enough information to infer the types correctly for the whole file. In our particular case, Arrow infers a field in the schema as date32 from the first batch, but a later batch contains an empty field value that can't be converted to date32.
When I increase the batch size so that such a value falls into the first batch, Arrow sets the string type for that field (I'm not sure why it isn't a nullable date32), since the value can't be converted to date32, and the whole file is read successfully.
This problem can easily be reproduced with the following code and the attached dataset:
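A minimal repro sketch, assuming the dataset is read with `pyarrow.csv`; the filename `dataset.csv` and the failing `block_size` value are placeholders for the attached data:

```python
import pyarrow.csv as csv

# Read the CSV in blocks small enough that the problematic (empty)
# date value does NOT fall into the first block. Arrow infers the
# column as date32 from the first block and then fails to convert
# the empty value in a later block, raising a conversion error.
table = csv.read_csv(
    "dataset.csv",  # placeholder for the attached dataset
    read_options=csv.ReadOptions(block_size=1_000_000),  # placeholder size
)
```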
When we use a block_size of `10_000_000`, the file can be read successfully since the problematic value is included in the first batch.
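For comparison, a sketch of the successful read with the larger block size (filename still a placeholder):

```python
# With block_size=10_000_000 the empty value lands in the first block,
# so Arrow infers the column as string instead of date32 and the whole
# file is read successfully.
table = csv.read_csv(
    "dataset.csv",
    read_options=csv.ReadOptions(block_size=10_000_000),
)
```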
An error occurs when I try to attach the dataset, so you can download it from Google Drive here.