SPARK-13803

Standalone master does not balance cluster-mode drivers across workers


Description

The Spark standalone cluster master does not balance drivers running in cluster mode across all the available workers. Instead, it assigns each submitted driver to the first available worker. The schedule() method attempts to randomly shuffle the HashSet of workers before launching drivers, but that operation has no effect: a Scala HashSet is an unordered data structure, so shuffling it simply rebuilds a HashSet whose iteration order is determined by the elements' hash codes, not by the shuffle. This behavior is a regression introduced by SPARK-1706: previously, the workers were copied into an ordered list before the random shuffle was performed.
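A minimal sketch of why the shuffle is a no-op (the worker names here are illustrative, not taken from the Spark source): shuffling a HashSet yields another HashSet with the same iteration order, whereas copying into an ordered Seq first makes the shuffle meaningful, as the pre-SPARK-1706 code did.

```scala
import scala.collection.immutable.HashSet
import scala.util.Random

val workers = HashSet("worker-1", "worker-2", "worker-3", "worker-4")

// Shuffling a HashSet rebuilds a HashSet; its iteration order is a
// function of the elements' hash codes, so the shuffle has no effect.
val shuffledSet = Random.shuffle(workers)
println(shuffledSet.toList == workers.toList)  // true: order unchanged

// Copying into an ordered Seq first preserves the shuffled order,
// so iterating over it visits workers in a genuinely random order.
val shuffledSeq = Random.shuffle(workers.toSeq)
println(shuffledSeq.sorted == workers.toSeq.sorted)  // true: same elements
```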

      I am able to reproduce this bug in all releases of Spark from 1.4.0 to 1.6.1 using the following steps:

      1. Start a standalone master and two workers
      2. Repeatedly submit applications to the master in cluster mode (--deploy-mode cluster)

      Observe that all the drivers are scheduled on only one of the two workers as long as resources are available on that worker. The expected behavior is that the master randomly assigns drivers to both workers.


People

    Assignee: Nan Zhu (codingcat)
    Reporter: Brian Wongchaowart (bwongchaowart)
