SPARK-24374

SPIP: Support Barrier Execution Mode in Apache Spark


    Details

    • Type: Epic
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 2.4.0
    • Component/s: ML, Spark Core
    • Labels:
    • Epic Name: Support Barrier Execution Mode
    • Target Version/s:

    Description

    (See details in the linked/attached SPIP doc.)

    The proposal here is to add a new scheduling model to Apache Spark so users can properly embed distributed DL training as a Spark stage and simplify the distributed training workflow. For example, Horovod uses MPI to implement all-reduce, which accelerates distributed TensorFlow training. That computation model is different from the MapReduce model used by Spark: in Spark, a task in a stage doesn’t depend on any other task in the same stage and hence can be scheduled independently, whereas in MPI all workers start at the same time and pass messages around.

    To embed this kind of workload in Spark, we need to introduce a new scheduling model, tentatively named “barrier scheduling”, which launches all tasks of a stage at the same time and provides users with enough information and tooling to embed distributed DL training. Spark can also provide an extra layer of fault tolerance: if some tasks fail in the middle, Spark aborts all tasks in the stage and restarts the whole stage.
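    For reference, the API that shipped with this work in Spark 2.4 exposes barrier mode through RDD.barrier().mapPartitions and BarrierTaskContext. Below is a minimal Scala sketch; the data, app name, and the "launch training" step are placeholders, and only the barrier API calls come from Spark itself:

        import org.apache.spark.{BarrierTaskContext, SparkConf, SparkContext}

        object BarrierDemo {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(new SparkConf().setAppName("barrier-demo"))

            // 4 partitions => 4 tasks that must all be scheduled together.
            val rdd = sc.parallelize(1 to 100, numSlices = 4)

            // barrier() switches the stage into barrier execution mode.
            val result = rdd.barrier().mapPartitions { iter =>
              val ctx = BarrierTaskContext.get()

              // Addresses of all tasks in the barrier stage, e.g. for wiring
              // up an MPI / all-reduce job across the workers.
              val peers = ctx.getTaskInfos().map(_.address)

              // Global sync point: blocks until every task in the stage
              // has reached this call.
              ctx.barrier()

              // ... launch the external distributed training process here ...
              Iterator.single((ctx.partitionId(), peers.length))
            }

            result.collect().foreach { case (pid, n) =>
              println(s"task $pid saw $n peers")
            }
            sc.stop()
          }
        }

    If any task in the barrier stage fails, Spark aborts the remaining tasks and retries the whole stage, which is the extra fault-tolerance layer described above.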

    Attachments

    Issue Links

    Activity

    People

    • Assignee: mengxr Xiangrui Meng
    • Reporter: mengxr Xiangrui Meng
    • Shepherd: Reynold Xin

    Dates

    • Created:
    • Updated:
    • Resolved:
