SPARK-20163: Kill all running tasks in a stage in case of fetch failure


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.0.1
    • Fix Version/s: None
    • Component/s: Scheduler, Spark Core
    • Labels: None

    Description

      Currently, the scheduler does not kill the running tasks in a stage when it encounters a fetch failure. As a result, we might end up running many duplicate tasks in the cluster. There is already a TODO in TaskSetManager to kill all running tasks, but it has not been implemented.
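
      A minimal sketch of the behavior the TODO asks for. All names here (the `TaskScheduler` trait, `handleFetchFailure`, `taskLaunched`, the `runningTasks` set) are illustrative assumptions, not the actual TaskSetManager internals; the point is only to show the shape of the fix: on a fetch failure, kill every other running attempt in the stage instead of letting it finish as duplicate work.

```scala
// Illustrative sketch only: types and method names are hypothetical,
// not the real Spark scheduler API.
trait TaskScheduler {
  def killTaskAttempt(taskId: Long, interruptThread: Boolean, reason: String): Boolean
}

class TaskSetManager(sched: TaskScheduler) {
  // Task IDs of attempts from this stage that are still running.
  private val runningTasks = scala.collection.mutable.HashSet.empty[Long]

  def taskLaunched(tid: Long): Unit = runningTasks += tid

  // On a fetch failure the stage will be resubmitted anyway, so the
  // other in-flight attempts can only produce duplicate results.
  def handleFetchFailure(failedTid: Long): Unit = {
    for (tid <- runningTasks if tid != failedTid) {
      sched.killTaskAttempt(tid, interruptThread = true,
        reason = "another task in the stage hit a fetch failure")
    }
    runningTasks.clear()
  }
}
```

      Without the kill loop, the duplicate attempts run to completion and their results are discarded when the stage is retried, wasting cluster slots.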


            People

              Assignee: Unassigned
              Reporter: Sital Kedia (sitalkedia@gmail.com)
              Votes: 0
              Watchers: 5

              Dates

                Created:
                Updated:
                Resolved: