- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Component/s: Spark Core
- Labels: None
There are cases in which the worker fails to kill the executor.
One way this can happen: the executor is in a bad state, fails to heartbeat, and the master tells the worker to kill it, but the executor is in such a bad state that the kill request is ignored. This can occur, for example, when the executor is stuck in heavy GC.
The root cause is that the Process.destroy() API is not forceful enough: on POSIX systems it sends SIGTERM, which a wedged process may never act on. Java 8 added a new API, destroyForcibly(), which sends SIGKILL. We should use that when it is available.
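A minimal sketch of one way to do this while remaining compatible with Java 7: look up destroyForcibly() via reflection and fall back to destroy() when it is absent. The helper name killProcess is hypothetical, not Spark's actual method.

```java
import java.lang.reflect.Method;

public class ProcessKiller {
    // Kill the process as forcefully as the running JVM allows.
    // On Java 8+ this resolves and invokes destroyForcibly() (SIGKILL on
    // POSIX); on Java 7 the reflective lookup fails and we fall back to
    // the plain destroy() (SIGTERM on POSIX).
    public static void killProcess(Process process) {
        try {
            Method destroyForcibly = Process.class.getMethod("destroyForcibly");
            destroyForcibly.invoke(process);
        } catch (NoSuchMethodException e) {
            // Pre-Java-8 JVM: destroyForcibly() does not exist.
            process.destroy();
        } catch (Exception e) {
            // Reflection failed for another reason; still attempt a kill.
            process.destroy();
        }
    }

    public static void main(String[] args) throws Exception {
        // Spawn a long-running child process, then kill it forcefully.
        Process p = new ProcessBuilder("sleep", "60").start();
        killProcess(p);
        p.waitFor();
        // A killed process exits with a nonzero status.
        System.out.println(p.exitValue() != 0);
    }
}
```

Reflection keeps the code compiling and running on both Java 7 and Java 8, at the cost of losing compile-time checking of the method name.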