Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Version: 0.15.0
Description
I've noticed that when I read a large number of Parquet files with pyarrow.parquet.read_table(...), my program's memory usage becomes very bloated, even though I don't keep the Table objects around after converting them to pandas DataFrames.
You can try this in an interactive Python shell to reproduce this problem:
```python
from tqdm import tqdm
from pyarrow.parquet import read_table

PATH = '/tmp/big.snappy.parquet'

for _ in tqdm(range(10)):
    read_table(PATH, use_threads=False, memory_map=False)
```
(Note that I'm not assigning the read_table(...) result to anything, so I'm not creating any new objects at all.)
During the for loop above, if you watch the memory usage (e.g. with htop), you'll see it keep creeping up. Either the program crashes during the 10 iterations, or, if they do complete, the process still occupies a huge amount of memory even though no objects are kept. That memory is only released when you exit() from Python.
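To quantify the growth without watching htop, here is a minimal sketch that prints Arrow's own allocation counter alongside the process RSS after each iteration; the psutil dependency is an assumption on my part, and the file path is just the same sample file as above:

```python
import psutil  # assumed dependency, only used to read the process RSS
import pyarrow as pa
from pyarrow.parquet import read_table

PATH = '/tmp/big.snappy.parquet'  # same sample file as in the repro above
proc = psutil.Process()

for i in range(10):
    read_table(PATH, use_threads=False, memory_map=False)
    # pa.total_allocated_bytes() reports what Arrow's default memory pool
    # currently holds; RSS is what the OS sees the whole process using.
    print(f"iteration {i}: "
          f"arrow allocated = {pa.total_allocated_bytes() / 2**20:.1f} MiB, "
          f"rss = {proc.memory_info().rss / 2**20:.1f} MiB")
```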
This problem means that my compute jobs using PyArrow currently need bigger server instances than should be necessary, which translates into significant extra cost.