SPARK-13002: Mesos scheduler backend does not follow the property spark.dynamicAllocation.initialExecutors


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.5.2, 1.6.0
    • Fix Version/s: 2.0.0
    • Component/s: Mesos

    Description

      When starting a Spark job on a Mesos cluster, all available cores are reserved (up to spark.cores.max), creating one executor per Mesos node and as many executors as the resource offers allow. This is the case even when dynamic allocation is enabled.

      When dynamic allocation is enabled, the number of executors launched at startup should be limited to the value of spark.dynamicAllocation.initialExecutors.

      The Mesos scheduler backend already follows the executor count computed by the ExecutorAllocationManager once the job is running, except at startup, when it simply creates all the executors it can.
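
      As an illustration, here is a minimal sketch of the configuration in question; the master URL, app name, and executor counts are placeholder values, not taken from this issue:

      {code:scala}
      import org.apache.spark.{SparkConf, SparkContext}

      // Dynamic allocation settings that should bound the number of executors
      // launched at startup. Per this issue, the Mesos backend instead accepted
      // all resource offers up to spark.cores.max, ignoring initialExecutors.
      val conf = new SparkConf()
        .setMaster("mesos://zk://host:2181/mesos") // placeholder Mesos master URL
        .setAppName("dynamic-allocation-demo")
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.shuffle.service.enabled", "true") // required for dynamic allocation
        .set("spark.dynamicAllocation.initialExecutors", "2") // expected startup cap
        .set("spark.dynamicAllocation.minExecutors", "1")
        .set("spark.dynamicAllocation.maxExecutors", "10")
        .set("spark.cores.max", "40") // before the fix, startup claimed cores up to this limit

      val sc = new SparkContext(conf)
      {code}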

People

    Assignee: Luc Bourlier (skyluc)
    Reporter: Luc Bourlier (skyluc)
    Votes: 0
    Watchers: 3
