Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
Description
Consider a scenario where storage memory usage has grown beyond the size of the unevictable storage region (spark.memory.storageFraction * maxMemory) and a task needs to acquire more execution memory by reclaiming evictable storage memory. If storage memory usage is less than maxMemory, it is possible that no storage blocks will be evicted at all. The cause is how MemoryStore.ensureFreeSpace() is called from inside StorageMemoryPool.shrinkPoolToReclaimSpace(): the free-space check is made against the pool's current (pre-shrink) size, so the pool can appear to have enough free memory even though it is about to be shrunk.
Here's a failing regression test which demonstrates this bug: https://github.com/apache/spark/commit/b519fe628a9a2b8238dfedbfd9b74bdd2ddc0de4?diff=unified#diff-b3a7cd2e011e048908d70f743c0ed7cfR155
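The interaction can be sketched with a small simulation. This is a hypothetical, simplified model in Python, not Spark's actual Scala implementation: the class and method names mirror StorageMemoryPool and MemoryStore.ensureFreeSpace(), but the block sizes, pool sizes, and eviction loop are invented for illustration.

```python
class MemoryStore:
    """Simplified model of Spark's MemoryStore (illustrative only)."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block id -> size in bytes

    def ensure_free_space(self, space_needed, pool_size, memory_used):
        """Evict blocks only while the pool lacks `space_needed` free bytes."""
        evicted = []
        free = pool_size - memory_used
        for block_id in list(self.blocks):
            if free >= space_needed:
                break  # pool already *looks* like it has enough free space
            free += self.blocks.pop(block_id)
            evicted.append(block_id)
        return evicted


class StorageMemoryPool:
    """Simplified model of Spark's StorageMemoryPool (illustrative only)."""

    def __init__(self, pool_size, memory_used, store):
        self.pool_size = pool_size
        self.memory_used = memory_used
        self.store = store

    def shrink_pool_to_reclaim_space(self, space_to_free):
        # BUG (modeled): the free-space check runs against the current
        # pool size. When memory_used < pool_size, the check passes and
        # nothing is evicted, yet the pool is shrunk anyway.
        evicted = self.store.ensure_free_space(
            space_to_free, self.pool_size, self.memory_used)
        self.pool_size -= space_to_free
        return evicted


# Storage usage (600) exceeds the unevictable region (e.g. 500 =
# storageFraction * maxMemory) but is below the pool size (1000), so the
# check sees 400 bytes free and evicts no blocks, even though the caller
# is reclaiming those 400 bytes for execution.
store = MemoryStore({"rdd_0_0": 300, "rdd_0_1": 300})
pool = StorageMemoryPool(pool_size=1000, memory_used=600, store=store)
evicted = pool.shrink_pool_to_reclaim_space(400)
print(evicted)       # [] -- no blocks evicted
print(pool.pool_size)  # 600, shrunk regardless
```

In this model, after the shrink the pool's size (600) equals its usage (600), but no block was evicted to back the 400 bytes handed to execution, which is the scenario the regression test above exercises.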
Attachments
Issue Links
- blocks
  - SPARK-12155 Execution OOM after a relative large dataset cached in the cluster. (Resolved)
- is blocked by
  - SPARK-12189 UnifiedMemoryManager double counts storage memory freed (Resolved)
- links to