Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 3.0.3, 3.1.3, 3.2.3, 3.3.2
- Fix Version/s: None
Description
When a stage is extremely large and Spark runs on spot instances or on problematic clusters with frequent worker/executor loss, the stage can rerun indefinitely because of task reruns triggered by the executor loss. This happens when the external shuffle service is on and the large stage takes hours to complete: when Spark tries to submit a child stage, it finds that the parent stage (the large one) is missing some shuffle partitions, so the parent stage has to rerun. When the rerun completes, new missing partitions are found for the same reason, and the cycle repeats.
We should add an attempt limit for this kind of scenario.
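As a rough sketch of the kind of limit proposed here (illustrative only; `StageResubmitLimiter`, `maxStageAttempts`, and `allowRerun` are hypothetical names, not Spark's actual DAGScheduler API), the scheduler could count how many times a stage is resubmitted because of missing shuffle partitions and abort the job once a configurable maximum is exceeded:

```scala
import scala.collection.mutable

// Hypothetical sketch only: the class and method names below are
// illustrative, not Spark's actual scheduler internals.
class StageResubmitLimiter(maxStageAttempts: Int) {

  // How many times each stage id has been (re)submitted so far.
  private val attempts = mutable.Map.empty[Int, Int].withDefaultValue(0)

  /** Called when a parent stage is found to be missing shuffle partitions
    * and would be resubmitted. Returns true if another rerun is allowed,
    * false if the job should be aborted instead of looping forever. */
  def allowRerun(stageId: Int): Boolean = {
    attempts(stageId) += 1
    attempts(stageId) <= maxStageAttempts
  }
}

object StageResubmitLimiterDemo extends App {
  val limiter = new StageResubmitLimiter(maxStageAttempts = 4)
  // Simulate repeated reruns of stage 7 caused by lost shuffle output
  // on spot instances; the 5th and later attempts are rejected.
  val decisions = (1 to 6).map(_ => limiter.allowRerun(stageId = 7))
  println(decisions) // Vector(true, true, true, true, false, false)
}
```

In the real scheduler this check would presumably live in the code path that resubmits parent stages with missing map outputs, with the maximum coming from a configuration entry rather than a constructor argument.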