There's a race in the shutdown path of CoarseGrainedExecutorBackend that may lead to the process exiting with the wrong status. When the race triggers, the failure shows up in the driver logs in yarn-cluster mode as containers reporting a non-zero exit status during an otherwise clean shutdown.
This happens because the user class is still running after the SparkContext is shut down, so the YarnAllocator instance stays alive long enough to fetch the exit status of the container. If the race is triggered, the container exits with the wrong status. In this case, enough containers hit the race that the application ended up failing due to too many container failures, even though it would probably have succeeded otherwise.
The race is as follows:
- CoarseGrainedExecutorBackend receives a StopExecutor
- Before it can enqueue a "Shutdown" message, the socket is disconnected and NettyRpcEnv enqueues a "RemoteProcessDisconnected" message
- "RemoteProcessDisconnected" is processed first and calls "System.exit" with the wrong exit code for this case.
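The ordering dependency above can be sketched with a small simulation. Note this is a hypothetical model, not Spark's actual classes: whichever terminal message is processed first decides the exit code, which is why the early disconnect produces a failure status.

```scala
// Illustrative model of the executor's single-threaded message loop.
// Message names mirror the ones in the description; the dispatch logic
// is a simplification, not the real CoarseGrainedExecutorBackend code.
sealed trait Message
case object StopExecutor extends Message
case object Shutdown extends Message
case object RemoteProcessDisconnected extends Message

object RaceModel {
  // Returns the exit code the process would use, given the order in
  // which messages land in the queue. -1 means "still running"; the
  // first terminal message wins, standing in for System.exit.
  def run(messages: Seq[Message]): Int = {
    var exitCode = -1
    messages.foreach {
      case StopExecutor =>
        // In the real code this asynchronously enqueues Shutdown, so a
        // disconnect event can slip into the queue ahead of it.
        ()
      case Shutdown if exitCode == -1 =>
        exitCode = 0 // clean shutdown
      case RemoteProcessDisconnected if exitCode == -1 =>
        exitCode = 1 // treated as a failure: the "wrong" exit status
      case _ =>
        () // process already "exited"; later messages are irrelevant
    }
    exitCode
  }
}
```

Running the two orderings side by side shows the race: `RaceModel.run(Seq(StopExecutor, Shutdown))` yields 0, while `RaceModel.run(Seq(StopExecutor, RemoteProcessDisconnected, Shutdown))` yields 1.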
You can see this in the executor logs: both messages are processed.
The fix is to ignore the disconnect event when the backend is already shutting down.
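A minimal sketch of that guard, assuming a `stopping` flag set as soon as StopExecutor is received (the class and method names here are illustrative, loosely modeled on Spark's RpcEndpoint callbacks, not the actual patch):

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical backend sketch: exitCode stands in for System.exit so the
// behavior can be observed without terminating the JVM.
class ExecutorBackendSketch {
  private val stopping = new AtomicBoolean(false)
  var exitCode: Int = -1 // -1 means the process is still running

  def receiveStopExecutor(): Unit = {
    // Mark shutdown in progress BEFORE the Shutdown message is enqueued,
    // so a racing disconnect event sees the flag.
    stopping.set(true)
    // ... asynchronously enqueue Shutdown ...
  }

  def onShutdown(): Unit =
    if (exitCode == -1) exitCode = 0 // clean exit

  def onDisconnected(): Unit = {
    if (stopping.get()) {
      // Expected during a clean stop: the driver closed the connection.
      // Ignore it instead of exiting with a failure code.
    } else if (exitCode == -1) {
      exitCode = 1 // unexpected disconnect: genuine failure
    }
  }
}
```

With the flag in place, a disconnect that arrives between StopExecutor and Shutdown is ignored, and the process still reports the clean exit code; a disconnect with no stop in progress is still treated as a failure.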