Spark / SPARK-19560

Improve tests for when DAGScheduler learns of "successful" ShuffleMapTask from a failed executor


Details

    • Type: Test
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Fix Version/s: 2.1.1
    • Component/s: Scheduler, Spark Core

    Description

There's some tricky code around the case where the DAGScheduler learns of a ShuffleMapTask that completed successfully, but ran on an executor that failed sometime after the task was launched. This case is tricky because the TaskSetManager (i.e., the lower-level scheduler) thinks the task completed successfully, while the DAGScheduler considers the output it generated to be no longer valid (it was probably lost when the executor was lost). As a result, the DAGScheduler needs to re-submit the stage so that the task can be re-run. This behavior is exercised by some existing tests but not clearly documented, so we should improve those tests to prevent future bugs (markhamstra encountered this while attempting to find a better fix for SPARK-19263).
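The mismatch described above can be sketched in a minimal, self-contained model. This is not Spark's actual DAGScheduler code; the object and method names here (`ShuffleOutputTracker`, `taskSucceeded`, `executorLost`) are hypothetical and exist only to illustrate why a "successful" completion from a lost executor must be rejected and the stage re-run:

```scala
// Hypothetical, simplified model of the DAGScheduler's dilemma: the
// lower-level scheduler reports a task success, but the executor that
// produced the output is already gone, so the output cannot be trusted.
object ShuffleOutputTracker {
  // Map from partition id to the executor currently holding its shuffle output.
  private var outputs = Map.empty[Int, String]
  private var liveExecutors = Set.empty[String]

  def registerExecutor(execId: String): Unit = liveExecutors += execId

  def executorLost(execId: String): Unit = {
    liveExecutors -= execId
    // Outputs on the lost executor are presumed lost and must be forgotten.
    outputs = outputs.filterNot { case (_, e) => e == execId }
  }

  // Returns true if the completion is accepted; false means the output is
  // invalid and the stage must be resubmitted so the task re-runs.
  def taskSucceeded(partition: Int, execId: String): Boolean = {
    if (liveExecutors.contains(execId)) {
      outputs += partition -> execId
      true
    } else {
      // TaskSetManager saw a success, but the executor died after launch:
      // do not register the output.
      false
    }
  }

  def hasOutput(partition: Int): Boolean = outputs.contains(partition)
}
```

In this model, a success reported for an executor that has already been marked lost is simply not registered, which is the condition the improved tests should pin down explicitly.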


People

    Assignee: Kay Ousterhout (kayousterhout)
    Reporter: Kay Ousterhout (kayousterhout)
    Votes: 0
    Watchers: 2
