Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: 4.0.0
Description
Currently there are no task-level on-heap/off-heap execution memory metrics. There is a peakExecutionMemory metric, but its semantics are confusing: it only covers the execution memory used by shuffle/join/aggregate/sort, which is accumulated in specific operators.
We can easily maintain the whole task-level peak memory in TaskMemoryManager, assuming acquireExecutionMemory is the single narrow waist for acquiring execution memory.
It would also be nice to clean up/deprecate the poorly named `peakExecutionMemory`.
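The "narrow waist" idea can be sketched as follows. This is an illustrative standalone class, not Spark's actual TaskMemoryManager API: every acquisition goes through one method, so the task-level peak can be updated in exactly one place, and releases never lower the recorded high-water mark.

```java
// Hedged sketch of peak tracking at a single acquisition choke point.
// Class and method names are illustrative, not Spark's real API.
public class PeakTrackingMemoryManager {
    private long used = 0L;  // currently acquired execution memory (bytes)
    private long peak = 0L;  // task-level high-water mark (bytes)

    // The single "narrow waist": all execution-memory acquisitions pass
    // through here, so the peak is maintained in one place.
    public synchronized long acquireExecutionMemory(long required) {
        used += required;
        if (used > peak) {
            peak = used;
        }
        return required;
    }

    // Releasing memory lowers current usage but never the recorded peak.
    public synchronized void releaseExecutionMemory(long size) {
        used -= size;
    }

    public synchronized long getPeakExecutionMemory() {
        return peak;
    }
}
```

Because the peak is tracked at the manager level rather than in individual operators, it captures all execution memory a task acquires, not just shuffle/join/aggregate/sort.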
Creating two follow-up sub-tickets:
- https://issues.apache.org/jira/browse/SPARK-48788: accumulate task metrics in the stage and display them in the Spark UI
- https://issues.apache.org/jira/browse/SPARK-48789: deprecate `peakExecutionMemory` once we have a replacement for it
Issue Links
- causes
  - SPARK-49228 Investigate ExternalAppendOnlyUnsafeRowArrayBenchmark (Resolved)
- is depended upon by
  - SPARK-48788 Accumulate task-level execution memory in stage and display in Spark UI (Open)
  - SPARK-48789 Deprecate peakExecutionMemory metrics (Open)