Details
- Type: Sub-task
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version: 1.1.0
Description
To support dynamic scaling of a Spark application, Spark's scheduler will need hooks for explicitly decommissioning executors. We'll also need basic heuristics governing when to start and stop executors based on load. An initial goal is to keep this very simple.
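
To make the intent concrete, here is a minimal Scala sketch of what such scheduler hooks and a simple load-based heuristic might look like. All of the names here (ExecutorAllocationHooks, SimpleScalingPolicy, onLoadUpdate and its parameters) are hypothetical illustrations, not Spark's actual scheduler API.

{code:scala}
// Hypothetical hooks the scheduler could expose for explicit executor control.
trait ExecutorAllocationHooks {
  def requestExecutors(numAdditional: Int): Unit
  def decommissionExecutors(executorIds: Seq[String]): Unit
}

// A deliberately simple heuristic: scale up when tasks are backlogged and no
// executor is idle, scale down when there is no pending work left.
class SimpleScalingPolicy(hooks: ExecutorAllocationHooks) {

  def onLoadUpdate(
      pendingTasks: Int,
      activeExecutors: Seq[String],
      idleExecutors: Seq[String]): Unit = {
    if (pendingTasks > 0 && idleExecutors.isEmpty) {
      // Backlog and no idle capacity: ask the cluster manager for one more executor.
      hooks.requestExecutors(1)
    } else if (pendingTasks == 0 && idleExecutors.nonEmpty) {
      // No work left: release the executors that are sitting idle.
      hooks.decommissionExecutors(idleExecutors)
    }
  }
}
{code}

A real policy would likely add damping (e.g. requiring a sustained backlog before scaling up and an idle timeout before scaling down), but per the goal stated above, the first version should stay very simple.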