Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: 3.4.0
- Labels: None
Description
Currently, if the memory overhead is not provided for a YARN job, it defaults to 10% of the respective driver/executor memory. This 10% is hard-coded, and the only way to increase the memory overhead is to set its exact value. We have seen jobs use more than 10% of their memory as overhead, so it would be helpful to be able to configure the default overhead factor so that the overhead does not need to be pre-calculated for every driver/executor memory size.
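The behavior described above can be sketched as follows. This is a minimal illustration, not Spark's actual implementation; it assumes the documented default 10% factor and the 384 MiB minimum overhead that Spark applies when `spark.{driver,executor}.memoryOverhead` is not set explicitly.

```python
MIN_OVERHEAD_MIB = 384  # documented floor for the computed overhead

def memory_overhead_mib(container_memory_mib: int, overhead_factor: float = 0.10) -> int:
    """Overhead requested on top of driver/executor memory when no
    explicit memoryOverhead is set: max(factor * memory, 384 MiB)."""
    return max(int(container_memory_mib * overhead_factor), MIN_OVERHEAD_MIB)

# With the hard-coded 10% factor, a 20 GiB executor gets 2 GiB of overhead:
print(memory_overhead_mib(20 * 1024))        # 2048
# A configurable factor (what this issue requests) would let users ask
# for, e.g., 20% without pre-calculating an absolute value per size:
print(memory_overhead_mib(20 * 1024, 0.20))  # 4096
```

The point of the request is the second call: today users must compute and set the absolute overhead themselves whenever 10% is insufficient.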
Issue Links
- causes SPARK-39363: fix spark.kubernetes.memoryOverheadFactor deprecation warning (In Progress)