Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Not A Problem
Description
Methods such as Dataset.show and Dataset.take use Limit (CollectLimitExec), which leverages SparkPlan.executeTake to efficiently collect the required number of elements back to the driver.
However, under whole-stage codegen, resources are usually released only after all elements have been consumed (e.g., in HashAggregate). When the Limit stops consuming the iterator early, those resources are never released, causing a memory leak with Dataset.show, for example.
We can add a task completion listener to HashAggregate to avoid the memory leak.
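The proposed fix can be sketched as follows. `TaskContext.addTaskCompletionListener` is Spark's actual API for running cleanup when a task finishes; `AggBuffer` and `allocateAggregationBuffer` are hypothetical stand-ins for the operator's real memory-holding state (such as HashAggregate's hash map), not Spark's actual code:

```scala
import org.apache.spark.TaskContext

// Hypothetical buffer type standing in for the operator's hash map.
trait AggBuffer { def free(): Unit }

def allocateAggregationBuffer(): AggBuffer = new AggBuffer {
  def free(): Unit = () // release acquired execution memory here
}

// Inside the operator's per-partition setup (e.g. HashAggregateExec):
val buffer = allocateAggregationBuffer()
TaskContext.get().addTaskCompletionListener { _: TaskContext =>
  // Runs when the task completes, even if a Limit (via executeTake)
  // abandoned the iterator before fully consuming it, so the
  // operator's memory is freed instead of leaking.
  buffer.free()
}
```

Because the listener fires on task completion regardless of whether the iterator was exhausted, the early-exit path taken by Dataset.show no longer strands the allocated memory.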
Issue Links
- relates to SPARK-18557 "Downgrade the memory leak warning message" (Resolved)