SPARK-42577

A large stage could run indefinitely due to executor lost


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.3, 3.1.3, 3.2.3, 3.3.2
    • Fix Version/s: 3.5.0
    • Component/s: Spark Core
    • Labels: None

    Description

      When a stage is extremely large and Spark runs on spot instances or on problematic clusters with frequent worker/executor loss, the stage can run indefinitely because tasks are re-run after each executor loss. This happens when the external shuffle service is enabled and the large stage takes hours to complete: when Spark tries to submit a child stage, it finds that the parent stage (the large one) is missing some shuffle partitions, so the large stage has to re-run. By the time it completes again, it finds new missing partitions for the same reason, and the cycle repeats.

      We should add an attempt limit for this kind of scenario.
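      The failure loop and the proposed fix can be sketched as follows. This is a minimal, hypothetical simulation (not Spark's scheduler code): `run_stage_with_attempt_limit`, `lose_partitions_each_run`, and `max_stage_attempts` are illustrative names assumed for this sketch, standing in for the DAGScheduler's parent-stage resubmission and the attempt cap the issue proposes.

```python
# Hypothetical simulation of the failure mode described above: a long parent
# stage whose shuffle outputs keep disappearing (executor lost while the
# external shuffle service holds its blocks), so each child-stage submission
# discovers missing parent partitions and re-runs the parent. A per-stage
# attempt cap turns the otherwise indefinite loop into a hard failure.

def run_stage_with_attempt_limit(lose_partitions_each_run, max_stage_attempts=4):
    """Re-run a parent stage until no partitions are missing, or the attempt
    cap is hit. `lose_partitions_each_run(attempt)` models executor loss by
    returning the set of partitions lost during that run."""
    attempts = 0
    missing = {0, 1, 2}  # initially, all partitions need computing
    while missing:
        attempts += 1
        if attempts > max_stage_attempts:
            raise RuntimeError(
                f"Stage aborted after {max_stage_attempts} attempts "
                "(partitions kept going missing)")
        missing = lose_partitions_each_run(attempts)  # losses during this run
    return attempts

# A cluster that always loses one partition: without a cap this loops forever.
always_flaky = lambda attempt: {attempt % 3}
try:
    run_stage_with_attempt_limit(always_flaky, max_stage_attempts=4)
except RuntimeError as e:
    print(e)  # the cap converts an indefinite re-run into a clear failure

# A cluster that stabilizes after two runs completes normally.
stabilizes = lambda attempt: {0} if attempt < 2 else set()
print(run_stage_with_attempt_limit(stabilizes))  # → 2
```

      The design point is that the cap bounds *consecutive* whole-stage re-runs, which is distinct from the existing per-task failure limit: each individual re-run can succeed at the task level while the stage as a whole never finishes.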


          People

            Assignee: Tengfei Huang (ivoson)
            Reporter: wuyi (Ngone51)
            Votes: 0
            Watchers: 5
