SPARK-1706: Allow multiple executors per worker in Standalone mode


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.0.0
    • Fix Version/s: 1.4.0
    • Component/s: Deploy
    • Labels: None

Description

      Right now, if people want to launch multiple executors on each machine, they need to start multiple standalone workers. This is not too difficult, but it means there are extra JVMs sitting around.

      We should just allow users to set the number of cores they want per executor in standalone mode and then allow packing multiple executors onto each node. This would make standalone mode more consistent with YARN in how resources are requested.
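      For illustration, here is a minimal sketch of how such a setting might be used, assuming the new per-executor setting is named spark.executor.cores (an assumption here; that is the name YARN mode uses for the same concept) alongside the existing spark.cores.max cap on an application's total cores:

        import org.apache.spark.{SparkConf, SparkContext}

        object MultiExecutorExample {
          def main(args: Array[String]): Unit = {
            val conf = new SparkConf()
              .setMaster("spark://master:7077")    // standalone master URL
              .setAppName("multi-executor-example")
              .set("spark.executor.cores", "2")    // proposed: cores per executor
              .set("spark.cores.max", "8")         // existing: total cores for the app
            val sc = new SparkContext(conf)
            // With 8-core workers, the master could pack four 2-core executors
            // onto one worker instead of needing four separate workers.
            sc.stop()
          }
        }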

      It's not too big of a change as far as I can see. You'd need to:

      1. Introduce a configuration for the number of cores you want per executor.
      2. Change the scheduling logic in Master.scala to take this into account.
      3. Change CoarseGrainedSchedulerBackend so it no longer assumes a 1:1 correspondence between hosts and executors.

      And maybe modify a few other places. A rough sketch of the packing logic follows.
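      Below is a hypothetical sketch of the packing idea, not the actual Master.scala code; WorkerInfo and scheduleExecutors are simplified stand-ins invented for illustration. The point is that the master grants cores in fixed coresPerExecutor chunks, so several executors can land on one worker when it has enough free cores:

        object PackingSketch {
          // Simplified stand-in for the master's per-worker bookkeeping.
          case class WorkerInfo(id: String, var coresFree: Int)

          /** Greedily packs fixed-size executors onto workers, returning
            * (workerId, cores) pairs, one per launched executor. */
          def scheduleExecutors(
              workers: Seq[WorkerInfo],
              coresPerExecutor: Int,
              coresToGrant: Int): Seq[(String, Int)] = {
            var remaining = coresToGrant
            val assignments = Seq.newBuilder[(String, Int)]
            for (w <- workers if remaining >= coresPerExecutor) {
              // Launch one executor per coresPerExecutor chunk of free cores,
              // rather than a single executor that takes the whole worker.
              while (w.coresFree >= coresPerExecutor && remaining >= coresPerExecutor) {
                w.coresFree -= coresPerExecutor
                remaining -= coresPerExecutor
                assignments += ((w.id, coresPerExecutor))
              }
            }
            assignments.result()
          }

          def main(args: Array[String]): Unit = {
            val workers = Seq(WorkerInfo("w1", 8), WorkerInfo("w2", 8))
            // 12 cores at 2 cores/executor: four executors packed on w1, two on w2.
            println(scheduleExecutors(workers, coresPerExecutor = 2, coresToGrant = 12))
          }
        }

      The greedy fill order here is only to keep the sketch short; the real standalone master also supports spreading an application's executors round-robin across workers (the existing spark.deploy.spreadOut behavior), and the same chunked granting works either way.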


People

    Assignee: Nan Zhu (codingcat)
    Reporter: Patrick Wendell (pwendell)
