Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.2.0
- Fix Version/s: None
- Component/s: None
Description
The resource requirements of an interactive shell vary heavily. Sometimes heavy commands are executed, and sometimes the user is thinking, getting coffee, being interrupted, etc.
A Spark shell allocates a fixed number of worker cores (at least in standalone mode). A user thus has the choice to either block other users from the cluster by allocating all cores (the default behavior) or restrict him/herself to only a few cores using the --total-executor-cores option. Either way, the cores allocated to the shell have low utilization, since they spend much of their time waiting for the user.
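For concreteness, a minimal sketch of the two options as they stand in standalone mode (the master URL and core count are placeholders):

    # Default: the shell claims all available cores for its lifetime
    spark-shell --master spark://master:7077

    # Capped: the shell limits itself to a fixed number of cores
    spark-shell --master spark://master:7077 --total-executor-cores 4

In both cases the allocation is fixed for the entire shell session, regardless of whether any computation is actually running.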
Instead, the Spark shell should allocate only the resources required to run the driver, and request worker cores only when computation is performed on RDDs.
This would allow multiple users to run interactive shells concurrently while still utilizing the entire cluster when performing heavy operations.
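SPARK-4751 (which this issue duplicates; see the links below) later implemented this behavior for standalone mode via dynamic allocation. A minimal sketch of the relevant configuration, assuming a Spark version that supports standalone dynamic allocation and has the external shuffle service enabled on the workers (the master URL and timeout value are placeholders):

    # Idle executors are released back to the cluster and re-acquired
    # automatically when the user runs a job
    spark-shell --master spark://master:7077 \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=0 \
      --conf spark.dynamicAllocation.executorIdleTimeout=60s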
Issue Links
- duplicates
  - SPARK-4751 Support dynamic allocation for standalone mode (Closed)
- relates to
  - SPARK-4922 Support dynamic allocation for coarse-grained Mesos (Closed)
  - SPARK-3174 Provide elastic scaling within a Spark application (Closed)