Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version/s: 2.3.4, 2.4.4
- Labels: None
Description
When we fetch task results from executors and find that the total size exceeds the configured `maxResultSize`, Spark simply aborts the stage and all dependent jobs. However, the task that triggered this is actually successful, yet no `TaskEnd` event is ever posted for it; as a result, it is never removed from `CoarseGrainedSchedulerBackend`. If dynamic allocation is enabled, zombie executor(s) remain in the resource manager and never die until the application ends.
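For reference, a minimal sketch of how the behavior might be reproduced (the application name, data sizes, and timeouts below are illustrative assumptions, not taken from this report): run it on a cluster with dynamic allocation enabled, let the job abort on `maxResultSize`, and then observe whether the executors that ran the "successful" tasks are ever released by the resource manager.

```scala
import org.apache.spark.sql.SparkSession

object MaxResultSizeZombieRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("max-result-size-zombie-repro")              // illustrative name
      .config("spark.driver.maxResultSize", "1m")            // deliberately tiny limit
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.executorIdleTimeout", "30s")
      .getOrCreate()

    val sc = spark.sparkContext
    try {
      // Each task returns roughly 1 MiB, so the accumulated result size
      // quickly exceeds spark.driver.maxResultSize and the stage is aborted.
      sc.parallelize(1 to 100, 100)
        .map(_ => Array.fill(1024 * 1024)(1.toByte))
        .collect()
    } catch {
      case e: Exception =>
        // The job fails here; executors whose tasks completed without a
        // TaskEnd event being posted may linger as zombies.
        println(s"Job aborted as expected: ${e.getMessage}")
    }

    // Keep the application alive to observe whether idle executors are released.
    Thread.sleep(5 * 60 * 1000)
    spark.stop()
  }
}
```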