The Spark standalone cluster master does not balance drivers running in cluster mode across the available workers. Instead, it assigns each submitted driver to the first available worker. The schedule() method attempts to randomly shuffle the HashSet of workers before launching drivers, but the shuffle has no effect: Random.shuffle on a HashSet builds its result as another HashSet, whose iteration order is determined by hashing rather than by the shuffle. This behavior is a regression introduced by SPARK-1706: previously, the workers were copied into an ordered list before the random shuffle was performed.
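The collection behavior described above can be demonstrated in isolation. This is a minimal sketch, not Spark's actual code; the worker names are illustrative:

```scala
import scala.collection.mutable.HashSet
import scala.util.Random

object ShuffleNoOpDemo {
  def main(args: Array[String]): Unit = {
    val workers = HashSet("worker-1", "worker-2", "worker-3", "worker-4")

    // Random.shuffle on a HashSet returns another HashSet, so the
    // result's iteration order is fixed by hashing, not by the shuffle.
    val shuffled = Random.shuffle(workers)
    println(workers.toList == shuffled.toList) // same iteration order

    // Copying into an ordered Seq first makes the shuffle meaningful,
    // which is effectively what the pre-SPARK-1706 code did.
    val trulyShuffled: Seq[String] = Random.shuffle(workers.toSeq)
    println(trulyShuffled.sorted == workers.toList.sorted) // same elements
  }
}
```

Because schedule() then iterates over the "shuffled" set, the driver always lands on whichever worker happens to come first in the hash-determined iteration order.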
I am able to reproduce this bug in all releases of Spark from 1.4.0 to 1.6.1 using the following steps:
- Start a standalone master and two workers
- Repeatedly submit applications to the master in cluster mode (--deploy-mode cluster)
Observe that all of the drivers are scheduled on only one of the two workers for as long as that worker has free resources. The expected behavior is for the master to distribute drivers randomly across both workers.
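The reproduction steps above can be sketched as the following commands (paths and host names are assumptions for illustration; this presumes a Spark 1.x distribution at $SPARK_HOME and the bundled examples jar):

```shell
# On the master host: start the standalone master.
$SPARK_HOME/sbin/start-master.sh

# On each of the two worker hosts: start a worker pointed at the master.
$SPARK_HOME/sbin/start-slave.sh spark://master-host:7077

# Repeatedly submit an application in cluster mode and watch where
# each driver is placed (e.g. on the master's web UI at port 8080).
for i in 1 2 3 4 5; do
  $SPARK_HOME/bin/spark-submit \
    --master spark://master-host:7077 \
    --deploy-mode cluster \
    --class org.apache.spark.examples.SparkPi \
    $SPARK_HOME/lib/spark-examples-*.jar 100
done
```

With the bug present, every driver should appear on the same worker until that worker's resources are exhausted.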