Description
PySpark's daemon-based worker factory has a very complicated process structure that I've always found confusing. The per-Java-worker daemon.py process launches a numCores-sized pool of subprocesses, and those subprocesses in turn launch the actual worker processes that process the data.
I think we can simplify this by having daemon.py launch the workers directly, without this extra layer of indirection. See my comments on the pull request that introduced daemon.py: https://github.com/mesos/spark/pull/563
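For illustration, here is a minimal sketch of what a direct-forking daemon could look like. All names here (handle_worker, the "ready" handshake) are hypothetical, and the real daemon.py's protocol with the JVM is more involved; this only shows the structural change of forking a worker per request instead of going through an intermediate pool layer:

{code:python}
# Hypothetical sketch: the daemon forks a worker process directly for each
# connection from the JVM, instead of delegating to a numCores-sized pool
# of intermediate subprocesses.
import os
import socket
import sys

def handle_worker(conn):
    # Placeholder for the real worker loop, which would read serialized
    # tasks from the JVM over `conn` and write results back.
    conn.sendall(b"ready\n")
    conn.close()

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))   # ephemeral port
    listener.listen(128)
    # Report the chosen port on stdout so the launching JVM can connect.
    print(listener.getsockname()[1])
    sys.stdout.flush()

    while True:
        conn, _ = listener.accept()
        pid = os.fork()               # fork a worker directly; no pool layer
        if pid == 0:
            listener.close()          # child: keep only the accepted socket
            handle_worker(conn)
            os._exit(0)
        conn.close()                  # parent: socket now belongs to child
        # Reap any finished children without blocking.
        try:
            while os.waitpid(-1, os.WNOHANG)[0]:
                pass
        except ChildProcessError:
            pass

if __name__ == "__main__":
    main()
{code}

The design point is simply that fork-per-request collapses the three-level daemon/pool/worker hierarchy into two levels, so there is one fewer class of process to reason about when debugging worker lifecycle issues.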