Details
Type: Improvement
Status: Closed
Priority: Critical
Resolution: Duplicate
Description
Hello,
Posting this from GitHub (wesmckinn asked for it):
https://github.com/apache/arrow/issues/2138
import pandas as pd
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Build a large minute-frequency DataFrame (~85k rows)
idx = pd.date_range('2017-01-01 12:00:00.000', '2017-03-01 12:00:00.000', freq='T')
df = pd.DataFrame({'numeric_col': np.random.rand(len(idx)),
                   'string_col': pd.util.testing.rands_array(8, len(idx))},
                  index=idx)

# Partition by calendar date
df["dt"] = df.index
df["dt"] = df["dt"].dt.date

table = pa.Table.from_pandas(df)
pq.write_to_dataset(table, root_path='dataset_name',
                    partition_cols=['dt'], flavor='spark')
This works, but it is memory-inefficient: the Arrow table is a full copy of the large pandas DataFrame, so peak memory roughly doubles and quickly saturates the RAM.
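A possible workaround (not part of the original report) is to convert and write the DataFrame in row slices, so that only one slice is materialized as an Arrow table at a time; repeated pq.write_to_dataset calls append new files under the same partition directories. This is a minimal sketch under that assumption; the helper name write_in_chunks and the chunk size are illustrative, not part of the pyarrow API.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

def write_in_chunks(df, root_path, partition_cols, chunk_rows=100_000):
    # Slice the DataFrame so only one chunk is ever duplicated
    # as an Arrow table at a time, bounding peak memory.
    for start in range(0, len(df), chunk_rows):
        chunk = df.iloc[start:start + chunk_rows]
        table = pa.Table.from_pandas(chunk)
        # Each call writes new (uniquely named) files into the
        # existing partition directories under root_path.
        pq.write_to_dataset(table, root_path=root_path,
                            partition_cols=partition_cols,
                            flavor='spark')

write_in_chunks(df, 'dataset_name', ['dt'])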
Thanks!
Issue Links
- is duplicated by ARROW-2628 [Python] parquet.write_to_dataset is memory-hungry on large DataFrames (Status: Open)