Spark / SPARK-9353

Standalone scheduling memory requirement incorrect if cores per executor is not set


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.5.0
    • Fix Version/s: 1.4.2, 1.5.0
    • Component/s: Deploy
    • Labels: None

    Description

      I tried to come up with a more succinct title.

      The issue only happens if `spark.executor.cores` is not set. Right now, if we have a worker with 8 GB of memory and we set `spark.executor.memory` to 1 GB, then the executor launched on that worker can be given at most 8 cores, even if the worker has more cores available.

      This is a regression caused by the fix for SPARK-8881.
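
      As an illustration, here is a minimal Scala sketch of the kind of per-core memory accounting that would produce this cap. It is loosely modeled on the master's scheduling loop, not the actual `Master.scheduleExecutorsOnWorkers` code, and all names here (`Worker`, `assignCores`, `executorMemoryGb`) are hypothetical. The assumption: when `spark.executor.cores` is unset, the single executor on a worker is grown one core at a time, and the scheduler charges the full executor memory for each core it assigns, capping the core count at workerMemory / executorMemory.

      ```scala
      // Hypothetical model of the standalone master's per-core scheduling loop
      // after SPARK-8881 (names and structure are illustrative only).
      object SchedulingSketch {
        final case class Worker(coresFree: Int, memoryFreeGb: Int)

        /** Cores assigned to the single executor on `worker` when
          * spark.executor.cores is unset. */
        def assignCores(worker: Worker, executorMemoryGb: Int): Int = {
          var assignedCores  = 0
          var assignedMemory = 0
          def canAssignCore: Boolean =
            worker.coresFree - assignedCores > 0 &&
              worker.memoryFreeGb - assignedMemory >= executorMemoryGb
          while (canAssignCore) {
            assignedCores += 1
            // Bug: the full executor memory is charged for every core, even
            // though all of these cores belong to the same single executor.
            assignedMemory += executorMemoryGb
          }
          assignedCores
        }

        def main(args: Array[String]): Unit = {
          // Worker with 16 cores and 8 GB; spark.executor.memory = 1 GB.
          val cores = assignCores(Worker(coresFree = 16, memoryFreeGb = 8),
                                  executorMemoryGb = 1)
          println(s"cores assigned: $cores") // prints 8, not 16
        }
      }
      ```

      Under that reading, the fix would be to charge memory only when a new executor is actually launched, so the single executor on this worker could take all 16 cores.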


          People

            Assignee: Andrew Or (andrewor14)
            Reporter: Andrew Or (andrewor14)
            Votes: 0
            Watchers: 3
