Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.1.2
- Fix Version/s: None
- Component/s: None
Description
Currently the driver and executor memory requests always equal the limits.
As stated in SPARK-23825, this is a reasonable default and is especially important for the driver.
For executors, however, it might be useful to let users deviate from this default.
In typical development environments on Kubernetes, the namespace quotas impose an upper bound on the memory requests that are possible.
The limits, however, can be much higher. For development, Spark is often run in client mode. While the driver should request the memory it needs, we want executors to leverage all the resources of the cluster when they are free, and we can live with an executor possibly being killed eventually.
Thus I propose the introduction of {{spark.{driver,executor}.limit.memory}}, similar to the existing {{spark.{driver,executor}.limit.cpu}}.
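A sketch of how the proposed settings might be used in the development scenario described above; note that {{spark.executor.limit.memory}} is the proposed (not yet existing) setting, and the values and master URL are illustrative:

{code}
# Client-mode submission where the executor memory request stays within
# the namespace quota, while the (proposed) limit is allowed to be higher.
spark-submit \
  --master k8s://https://<api-server> \
  --deploy-mode client \
  --conf spark.executor.memory=2g \
  --conf spark.executor.limit.memory=8g \
  ...
{code}

With this configuration the executor pods would be scheduled against a 2g request (fitting the quota) but could consume up to 8g when the cluster has free capacity, at the cost of possible eviction under memory pressure.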