Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.2.0
- Fix Version/s: None
Description
The killExecutor API currently does not allow killing an executor without also updating the total number of executors needed. When dynamic allocation is turned on and the allocator kills an executor, the scheduler backend reduces the total number of executors needed (see https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L635). This is incorrect because the allocator already takes care of setting the required number of executors itself.
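The coupling described above can be sketched with a minimal toy model. This is not the real Spark code: `SchedulerBackend`, `killExecutor`, `adjustTargetNumExecutors`, and `pendingReplacements` here are simplified, hypothetical stand-ins that only illustrate why tying the kill to the target count suppresses replacements.

```scala
// Minimal sketch (hypothetical names, not the real Spark classes) of the
// behavior described in the bug: killing an executor also lowers the
// requested total, so the cluster manager never asks for a replacement.
object KillExecutorSketch {

  class SchedulerBackend(var executors: Set[String], var targetTotal: Int) {

    // Models the criticized behavior: the kill optionally adjusts the target.
    def killExecutor(id: String, adjustTargetNumExecutors: Boolean): Unit = {
      executors -= id
      if (adjustTargetNumExecutors) targetTotal -= 1
    }

    // Executors the cluster manager would still need to allocate.
    def pendingReplacements: Int = targetTotal - executors.size
  }

  def main(args: Array[String]): Unit = {
    // Current behavior: the target drops with the kill, so the scheduler
    // believes nothing is missing and no replacement is requested.
    val current = new SchedulerBackend(Set("exec-1", "exec-2"), targetTotal = 2)
    current.killExecutor("exec-1", adjustTargetNumExecutors = true)
    println(current.pendingReplacements) // 0

    // What the allocator needs: it already owns targetTotal, so the kill
    // should leave the target alone and the shortfall becomes visible.
    val desired = new SchedulerBackend(Set("exec-1", "exec-2"), targetTotal = 2)
    desired.killExecutor("exec-1", adjustTargetNumExecutors = false)
    println(desired.pendingReplacements) // 1
  }
}
```

Under this sketch, the fix amounts to letting the caller (the allocator) opt out of the target adjustment, since it has already accounted for the kill in the target it reports to the scheduler.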
Issue Links
- is duplicated by
  - SPARK-22598 ExecutorAllocationManager does not requests new executors when executor fail and target has not change (Closed)
- relates to
  - SPARK-23365 DynamicAllocation with failure in straggler task can lead to a hung spark job (Resolved)