In reverse proxy mode, Spark exhausts the Jetty thread pool if the master node has too many CPUs or the cluster has too many executors:
For each ProxyServlet, Jetty creates selector threads: at least one, up to half the number of available CPUs:
this(Math.max(1, Runtime.getRuntime().availableProcessors() / 2));
In reverse proxy mode, a proxy servlet is set up for each executor.
I have a system with 7 executors and 88 CPUs on the master node. Jetty tries to instantiate 7 * 44 = 308 selector threads just for the reverse proxy servlets, but since the QueuedThreadPool is initialized with 200 threads by default, the UI hangs.
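The arithmetic can be sketched as follows (a minimal illustration only; the per-servlet formula mirrors Jetty's default quoted above, and the executor/CPU counts are the ones from this report):

```scala
// Sketch of the selector-thread arithmetic described above.
object SelectorThreadMath {
  // Selector threads Jetty allocates per ProxyServlet on a host with `cpus` CPUs,
  // per the SelectorManager default quoted above: max(1, cpus / 2).
  def selectorsPerServlet(cpus: Int): Int = math.max(1, cpus / 2)

  // In reverse proxy mode, one proxy servlet is set up per executor,
  // so the selector threads multiply.
  def totalSelectorThreads(executors: Int, cpus: Int): Int =
    executors * selectorsPerServlet(cpus)

  def main(args: Array[String]): Unit = {
    val total = totalSelectorThreads(executors = 7, cpus = 88)
    println(total)       // 308 selector threads
    println(total > 200) // true: exceeds Jetty's default QueuedThreadPool size
  }
}
```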
I have patched JettyUtils.scala to enlarge the thread pool (val pool = new QueuedThreadPool(400)). With this workaround, the UI works.
The Jetty defaults are clearly meant for a real web server: a machine with 88 CPUs can certainly expect a lot of traffic. The Spark admin UI, however, will rarely see concurrent accesses to the same application or the same executor.
I therefore propose to dramatically reduce the number of selector threads that get instantiated - at least by default.
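One shape such a reduced default could take (a sketch only; the cap of 8 selectors is a hypothetical illustration, not the proposed final value):

```scala
// Hypothetical reduced default: cap the selector threads per proxy servlet.
// The cap of 8 is an illustrative assumption, not part of the actual fix.
object ReducedSelectors {
  def selectorsPerServlet(cpus: Int, cap: Int = 8): Int =
    math.max(1, math.min(cap, cpus / 2))

  def main(args: Array[String]): Unit = {
    // With 7 executors on an 88-CPU master: 7 * 8 = 56 threads,
    // comfortably below the default QueuedThreadPool size of 200.
    println(7 * selectorsPerServlet(88))
  }
}
```

In Jetty 9, a cap like this could for instance be passed to the proxy's HttpClient via the `HttpClientTransportOverHTTP(selectors)` constructor; where exactly the hook belongs in JettyUtils.scala is an assumption here, not the actual patch.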
I will propose a fix in a pull request.