SPARK-39955

Improve LaunchTask process to avoid Stage failures caused by fail-to-send LaunchTask messages


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.4.0
    • Fix Version/s: 3.4.0
    • Component/s: Spark Core
    • Labels: None

    Description

      An RPC failure has two possible causes: Task Failure or Network Failure.

      (1) Task Failure: The network is healthy, but the task crashes the executor's JVM, so the RPC fails.
      (2) Network Failure: The executor is healthy, but the network between the Driver and the Executor is broken, so the RPC fails.

      These two kinds of failure should be handled differently. For a Task Failure, Spark should increment the task's `numFailures` counter; once `numFailures` exceeds a threshold, Spark marks the job as failed. For a Network Failure, Spark should not increment `numFailures` and should simply reassign the task to another executor, so the job is not marked as failed because of a transient network problem.

      Currently, however, Spark treats every RPC failure as a Task Failure, which causes unnecessary Spark job failures.
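      The proposed policy can be sketched as follows. This is a minimal illustration, not Spark's actual scheduler code (Spark Core is Scala): the `FailureKind` enum, the handler class, and its method names are hypothetical, and `maxFailures` stands in for a threshold such as `spark.task.maxFailures`.

```java
// Sketch of the proposed policy: only Task Failures count toward the
// per-task failure limit; Network Failures just trigger reassignment.
public class LaunchTaskFailureHandler {
    // Hypothetical categories for a failed LaunchTask RPC.
    public enum FailureKind { TASK_FAILURE, NETWORK_FAILURE }

    private final int maxFailures; // threshold, e.g. spark.task.maxFailures
    private int numFailures = 0;   // incremented only for Task Failures

    public LaunchTaskFailureHandler(int maxFailures) {
        this.maxFailures = maxFailures;
    }

    /** Returns true if the job should now be marked as failed. */
    public boolean onRpcFailure(FailureKind kind) {
        if (kind == FailureKind.TASK_FAILURE) {
            numFailures++;              // the task itself is at fault
            return numFailures >= maxFailures;
        }
        // Network Failure: do not count it; the caller should simply
        // reassign the task to another executor.
        return false;
    }

    public int getNumFailures() { return numFailures; }
}
```

      Under this sketch, any number of Network Failures leaves `numFailures` untouched, so a job can only be failed by repeated Task Failures.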

      Attachments

        Activity


          People

            Assignee: Kai-Hsun Chen (khchen)
            Reporter: Kai-Hsun Chen (khchen)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
