Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.5.2, 3.4.4
- Fix Version/s: None
Description
The whole amount of memory allocated to the JVM is set on the pod resources as both the request and the limit. This means there is no way to use extra memory for burst-like jobs in a shared environment. For example, if a Spark job uses an external process (outside the JVM) to access data, that process needs a bit of additional memory, and being able to configure a higher pod memory limit would be useful. Another thought: a way to configure a pod memory request that differs from the JVM memory could also be a valid use case.
GitHub PR: https://github.com/apache/spark/pull/41067
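For illustration, below is a minimal Scala sketch of the behaviour described above: today the executor pod's memory request and limit are both derived from spark.executor.memory plus spark.executor.memoryOverhead, so there is no headroom for bursting. The key spark.kubernetes.executor.limit.memory in the sketch is a hypothetical name used only to show the desired separation of request and limit; it is not an existing Spark setting and may not match what the linked PR implements.

```scala
import org.apache.spark.SparkConf

// Sketch only: illustrates the memory sizing behaviour described above,
// not the implementation in the linked PR.
object ExecutorMemoryLimitSketch {
  def main(args: Array[String]): Unit = {
    // Existing configs: JVM heap plus non-JVM overhead (external processes,
    // native buffers, etc.).
    val conf = new SparkConf()
      .set("spark.executor.memory", "4g")
      .set("spark.executor.memoryOverhead", "1g")

    // Current behaviour on Kubernetes: the executor pod's memory request and
    // limit are both set to heap + overhead, so the pod can never burst above
    // 5Gi even when the node has spare memory.
    val heapMiB = 4096L
    val overheadMiB = 1024L
    val podMemoryMiB = heapMiB + overheadMiB
    println(s"resources.requests.memory = resources.limits.memory = ${podMemoryMiB}Mi")

    // Hypothetical key (NOT an existing Spark config): a separate, higher pod
    // memory limit would keep the 5Gi request for scheduling while letting
    // burst-like jobs use spare node memory up to 8Gi.
    conf.set("spark.kubernetes.executor.limit.memory", "8g")
  }
}
```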