Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.3.0
- Fix Version/s: None
Description
Steps to reproduce:
1) Start a Spark cluster and check the storage memory value on the Spark Web UI "Executors" tab (it should be zero on a freshly started cluster)
2) Run:
val df = spark.sqlContext.range(1, 1000000000)
df.cache()
df.count()
df.unpersist(true)
3) Check the storage memory value again; it is now equal to 1 GB
It looks like the memory is actually released, but the stats reported in the UI are not updated. This makes cluster management more complicated.
Attachments
Issue Links
- is duplicated by
  - SPARK-25091 UNCACHE TABLE, CLEAR CACHE, rdd.unpersist() does not clean up executor memory (Resolved)
- links to