[SPARK-22354] --executor-cores in spark-submit fails to set "spark.executor.cores" for Mesos workers

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.2.0
    • Fix Version/s: None
    • Component/s: Mesos
    • Labels: None
    • Environment:

      Mesos 1.0.1
      Spark 2.2.0

Description

We are running Spark in cluster mode and limit the amount of CPU and memory per executor so that many executors spin up on each Mesos worker.

When we specify --executor-cores 1 in the spark-submit command sent to the dispatcher, Mesos allocates only one CPU to each executor, but Spark itself thinks each executor has as many CPUs as are available on the Mesos worker, so only one Spark executor starts per Mesos worker. If we instead explicitly set --conf "spark.executor.cores=1", the problem goes away and many Spark executors spin up on each Mesos worker.
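
For illustration, a minimal sketch of the two submissions against the Mesos dispatcher (the dispatcher URL, memory size, and application jar URL below are placeholders, not values from our setup):

    # Fails: Mesos offers each executor 1 CPU, but spark.executor.cores is
    # left unset, so Spark sizes one executor to the whole Mesos worker.
    ./bin/spark-submit \
      --master mesos://dispatcher-host:7077 \
      --deploy-mode cluster \
      --executor-cores 1 \
      --executor-memory 2g \
      http://example.com/app.jar

    # Workaround: setting spark.executor.cores explicitly lets many
    # executors spin up on each Mesos worker.
    ./bin/spark-submit \
      --master mesos://dispatcher-host:7077 \
      --deploy-mode cluster \
      --conf "spark.executor.cores=1" \
      --executor-memory 2g \
      http://example.com/app.jar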

People

• Assignee: Unassigned
• Reporter: dmcwhorter (David McWhorter)
• Votes: 0
• Watchers: 3
