Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Won't Fix
- Affects Version/s: 1.1.0
- Fix Version/s: None
- Component/s: None
Description
The Parquet Snappy codec allocates off-heap buffers for decompression[1]. In one case, the observed size of these buffers was high enough to add several GB to the overall virtual memory usage of the Spark executor process. I don't understand enough about our use of Snappy to know exactly how much data we would expect to be held in these buffers at any given time, but I can say a few things.
1. The dataset had individual rows that were fairly large, e.g. megabytes.
2. Direct buffers are not freed until the owning objects are garbage collected, and overall there was not much heap pressure, so they may simply never have been reclaimed (see the sketch below).
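A minimal sketch of point 2, as a hypothetical standalone program rather than Parquet's actual allocation path: direct buffers reserve native memory that shows up in the process's virtual memory rather than inside the heap limit, and that memory is only returned once the buffer objects themselves are collected.

{code:scala}
import java.nio.ByteBuffer

// Illustration only (not Parquet's code): each allocateDirect call reserves
// native memory outside the JVM heap, so it inflates the process's virtual
// memory without counting against -Xmx.
object DirectBufferDemo {
  def main(args: Array[String]): Unit = {
    // A few large direct buffers, roughly what a decompressor handling
    // multi-megabyte rows might hold on to.
    val buffers = (1 to 8).map(_ => ByteBuffer.allocateDirect(64 * 1024 * 1024))
    val totalMb = buffers.map(_.capacity.toLong).sum / (1024 * 1024)
    println(s"Reserved ~$totalMb MB of off-heap memory")
    // If these buffers become unreachable while heap pressure is low, the GC
    // that would release the native memory may not run for a long time.
  }
}
{code}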
I opened PARQUET-118 to ask whether an option to use on-heap buffers for decompression can be provided. In the meantime, we could consider changing the default back to gzip (a possible workaround is sketched below), or do nothing (it is unclear how many other users will hit this).
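As a hedged sketch of the gzip workaround, assuming the spark.sql.parquet.compression.codec setting is available in the Spark version in use, the codec can be switched on the SQLContext so newly written Parquet data avoids the Snappy path:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical example: switch the Parquet compression codec back to gzip
// via Spark SQL configuration (assumes the setting exists in this version).
object GzipParquetExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("gzip-parquet"))
    val sqlContext = new SQLContext(sc)
    sqlContext.setConf("spark.sql.parquet.compression.codec", "gzip")
    // Parquet tables written through this SQLContext now use gzip compression.
  }
}
{code}

Note that this only affects how new Parquet files are written; existing Snappy-compressed files would still go through the Snappy decompression path on read.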
Attachments
Issue Links
- is broken by: PARQUET-118 Provide option to use on-heap buffers for Snappy compression/decompression (Open)