Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
This is a more targeted fix to improve memory usage when scanning parquet files. It is related to broader issues like ARROW-14648, but those will likely take longer to fix. The goal here is to make it possible to scan large parquet datasets with many files, where each file has reasonably sized row groups (e.g., 1 million rows). Currently we run out of memory scanning a configuration as simple as the following (see the sketch after this list):
- 21 parquet files
- Each parquet file has 10 million rows split into row groups of size 1 million
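A minimal sketch of that configuration, assuming the pyarrow Python bindings; the directory, file names, and column name are illustrative, not from the issue. Writing the files and then streaming the dataset batch by batch should ideally keep peak memory near a few row groups, which is the behavior this fix targets:

```python
import os

import numpy as np
import pyarrow as pa
import pyarrow.dataset as ds
import pyarrow.parquet as pq

NUM_FILES = 21
ROWS_PER_FILE = 10_000_000
ROW_GROUP_SIZE = 1_000_000

os.makedirs("data", exist_ok=True)
for i in range(NUM_FILES):
    # One file: 10 million rows split into 1-million-row row groups.
    table = pa.table({"x": np.arange(ROWS_PER_FILE, dtype=np.int64)})
    pq.write_table(table, f"data/part-{i}.parquet",
                   row_group_size=ROW_GROUP_SIZE)

# Stream the dataset batch by batch without retaining the batches;
# prior to this fix, even this scan could exhaust memory.
dataset = ds.dataset("data", format="parquet")
for batch in dataset.to_batches():
    pass  # consume and discard each batch
```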
Issue Links
- is depended upon by: ARROW-15411 [C++][Datasets] Improve memory usage of datasets (Open)
- links to