SPARK-9260: Standalone scheduling can overflow a worker with cores


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.4.0
    • Fix Version/s: 1.4.2, 1.5.0
    • Component/s: Deploy
    • Labels: None

Description

If the cluster is started with `spark.deploy.spreadOut = false`, we may allocate more cores on a worker than it actually has. For example, if a worker has 8 cores and an application sets `spark.cores.max = 10`, the worker ends up over-allocated, as shown in the attached screenshot.
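The overflow can be shown with a minimal sketch. The code below is not the actual `Master.scala` scheduling logic; `Worker`, `App`, `coresFree`, and `coresLeft` are simplified stand-ins that mirror the real field names. It illustrates how granting `app.coresLeft` without capping at `worker.coresFree` drives the worker's free-core count negative, and the kind of `math.min` cap that prevents it.

```scala
// Illustrative sketch only, not the real Master.scala code.
object ScheduleSketch {
  case class Worker(id: String, var coresFree: Int)
  case class App(id: String, var coresLeft: Int)

  // Buggy behavior: grant the app everything it still wants,
  // ignoring how many cores the worker actually has free.
  def allocateBuggy(app: App, worker: Worker): Unit = {
    val toAssign = app.coresLeft       // e.g. 10, even though coresFree is 8
    worker.coresFree -= toAssign       // goes negative: the worker overflows
    app.coresLeft -= toAssign
  }

  // Capped behavior: never grant more than the worker has free.
  def allocateCapped(app: App, worker: Worker): Unit = {
    val toAssign = math.min(app.coresLeft, worker.coresFree)
    worker.coresFree -= toAssign
    app.coresLeft -= toAssign
  }

  def main(args: Array[String]): Unit = {
    val w = Worker("worker-1", coresFree = 8)
    val a = App("app-1", coresLeft = 10)          // spark.cores.max = 10
    allocateBuggy(a, w)
    println(s"buggy: worker coresFree = ${w.coresFree}")  // prints -2
  }
}
```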

Attachments


People

    Assignee: nravi (Nishkam Ravi)
    Reporter: andrewor14 (Andrew Or)
    Votes: 0
    Watchers: 2
