Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.1.0, 3.1.1, 3.2.0
- Labels: None
Description
We currently let users set the predicted time at which the cluster manager or cloud provider will terminate a decommissioning executor. However, for nodes where Spark itself triggers decommissioning, we should also let users specify a maximum time we are willing to allow an executor to spend decommissioning.
This is especially important if we start to use decommissioning in more places (for example, with excluded executors that are found to be flaky, which may or may not be able to decommission successfully).
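As a rough sketch of how this might look to users, the existing predicted-termination setting and the proposed maximum-decommission-time setting could both be passed as Spark configs at submit time. The config name `spark.executor.decommission.forceKillTimeout` below is the assumed name for the new setting; `spark.executor.decommission.killInterval` is the existing predicted-termination config.

```shell
# Hedged sketch, not a definitive interface: configuring decommission
# timeouts when launching an application.
spark-submit \
  --conf spark.decommission.enabled=true \
  --conf spark.storage.decommission.enabled=true \
  --conf spark.executor.decommission.killInterval=60s \
  --conf spark.executor.decommission.forceKillTimeout=120s \
  my_app.py
```

Here `killInterval` models when an external party (e.g. a cloud spot-instance reclaim) is expected to kill the node, while `forceKillTimeout` would bound how long a Spark-initiated decommission may run before the executor is forcibly terminated.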