Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Fix Version/s: 1.6.0
Description
Spark runs tasks in a thread pool that uses daemon threads in each executor. Because daemon threads do not block JVM exit, when the JVM gets a signal to shut down, those tasks keep running while the shutdown proceeds.
Now, when YARN preempts an executor, it sends SIGTERM to the executor process, triggering JVM shutdown. The shutdown hooks that run as a result may cause user code in those still-running tasks to fail, and those failures are reported to the driver. The driver then counts them towards the maximum number of allowed task failures (spark.task.maxFailures), even though in this case it shouldn't: the tasks failed only because the executor was preempted.
So we need a better way to handle that situation.