Description
The java process running "spark.executor.StandaloneExecutorBackend" fails to exit after its task is finished.
Under Mac OS X and Unix, a single shell script, "run", starts the Spark master, worker, and executor. Under Windows there is a cascade: "run.cmd" calls "run2.cmd", which in turn calls java. So when spark.deploy.worker.ExecutorRunner (which runs in the worker process) tries to kill the executor process via process.destroy(), it actually kills only the "run.cmd" process, while the "run2.cmd" process (and the java process it started to run the executor) stays alive.
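A minimal sketch of the underlying JDK behavior: Process.destroy() terminates only the process that the JVM launched directly, not any children that process spawned. Here "sh" stands in for cmd.exe so the sketch runs anywhere; on Windows the directly launched process would be "run.cmd", and the cascaded "run2.cmd"/java child would not be covered by destroy(). Requires Java 8+ for isAlive() and the timed waitFor().

```java
import java.util.concurrent.TimeUnit;

public class DestroyDemo {
    public static void main(String[] args) throws Exception {
        // Launch a shell process, analogous to the worker launching run.cmd.
        Process p = new ProcessBuilder("sh", "-c", "sleep 30").start();

        // This is essentially what ExecutorRunner does: destroy the handle
        // it holds. Only the directly spawned process is terminated; any
        // grandchild process (run2.cmd -> java on Windows) is untouched.
        p.destroy();
        p.waitFor(5, TimeUnit.SECONDS);
        System.out.println("direct child alive: " + p.isAlive());
    }
}
```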
See this thread on spark-users for the full details: https://groups.google.com/forum/#!topic/spark-users/NrdhVlrUDtU/discussion