Here's a simplification of a deadlock that can occur at shutdown if the user app has also installed its own shutdown hook to clean up:
- Spark Shutdown Hook thread runs
- SparkShutdownHookManager.runAll() is invoked, locking the SparkShutdownHookManager monitor, since runAll() is synchronized
- A user shutdown hook thread runs
- The user hook calls, for example, StreamingContext.stop(), which is synchronized, and so locks the StreamingContext
- The user hook blocks when StreamingContext.stop() tries to remove() the Spark Streaming shutdown task, since remove() is also synchronized on SparkShutdownHookManager, which is already locked per above
- Meanwhile, the Spark shutdown hook thread tries to execute the Spark Streaming shutdown task, but blocks on StreamingContext.stop(); each thread now holds the lock the other needs
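The lock-order inversion above can be sketched with two plain Java monitors standing in for the two objects involved (the names `manager` and `context` are placeholders for SparkShutdownHookManager and StreamingContext; the latches just force the problematic interleaving deterministically):

```java
import java.util.concurrent.CountDownLatch;

public class ShutdownDeadlockDemo {
    // Stand-ins for the two monitors involved in the deadlock.
    static final Object manager = new Object();  // plays SparkShutdownHookManager
    static final Object context = new Object();  // plays StreamingContext

    static void await(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch bothLocked = new CountDownLatch(2);

        // Spark's hook thread: holds the manager lock while running hooks,
        // then needs the context lock to run the streaming shutdown task.
        Thread sparkHook = new Thread(() -> {
            synchronized (manager) {          // runAll() is synchronized
                bothLocked.countDown();
                await(bothLocked);            // wait until the user hook holds its lock
                synchronized (context) {      // streaming task calls stop()
                    // never reached
                }
            }
        });

        // User's hook thread: holds the context lock inside stop(),
        // then needs the manager lock to remove() the streaming task.
        Thread userHook = new Thread(() -> {
            synchronized (context) {          // stop() is synchronized
                bothLocked.countDown();
                await(bothLocked);            // wait until the Spark hook holds its lock
                synchronized (manager) {      // remove() is synchronized
                    // never reached
                }
            }
        });

        sparkHook.setDaemon(true);            // daemon so the JVM can still exit
        userHook.setDaemon(true);
        sparkHook.start();
        userHook.start();

        sparkHook.join(500);
        userHook.join(500);
        // Both threads are still alive: each is blocked on the other's monitor.
        System.out.println("deadlocked=" + (sparkHook.isAlive() && userHook.isAlive()));
    }
}
```

Prints `deadlocked=true`: both threads acquire their first monitor, rendezvous on the latch, then block forever trying to acquire each other's.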
I think this is actually not that critical, since it requires a fairly specific setup, and in many cases it can be worked around by registering the user hook with Hadoop's shutdown hook mechanism, as Spark does, so that the hooks run serially.
I also think it's solvable in the code by not holding the SparkShutdownHookManager lock across the bodies of the three synchronized methods, since the lock really only needs to protect the hooks collection. In particular, runAll() shouldn't hold the lock while executing hooks.
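A minimal sketch of that fix, assuming a simplified manager with a plain list of hooks (the class and method names here are illustrative, not Spark's actual implementation): runAll() copies the hooks under the lock, then releases it before executing them, so a hook that calls back into remove() cannot deadlock against the manager's monitor.

```java
import java.util.ArrayList;
import java.util.List;

public class HookManagerSketch {
    private final List<Runnable> hooks = new ArrayList<>();

    // add() and remove() still synchronize briefly to protect the list.
    public synchronized void add(Runnable hook) { hooks.add(hook); }
    public synchronized boolean remove(Runnable hook) { return hooks.remove(hook); }

    // The lock is held only while snapshotting, never while a hook runs.
    public void runAll() {
        List<Runnable> snapshot;
        synchronized (this) {
            snapshot = new ArrayList<>(hooks);
        }
        for (Runnable hook : snapshot) {
            try {
                hook.run();
            } catch (Throwable t) {
                // Log and keep going so one failing hook doesn't stop shutdown.
                System.err.println("Hook failed: " + t);
            }
        }
    }

    public static void main(String[] args) {
        HookManagerSketch mgr = new HookManagerSketch();
        Runnable streamingTask = () -> System.out.println("streaming task ran");
        mgr.add(streamingTask);
        // A hook that re-enters the manager (the way StreamingContext.stop()
        // calls remove()) no longer blocks, because runAll() does not hold
        // the manager's lock while running it.
        mgr.add(() -> {
            mgr.remove(streamingTask);
            System.out.println("reentrant hook ran");
        });
        mgr.runAll();
    }
}
```

Prints `streaming task ran` then `reentrant hook ran`; the reentrant remove() acquires the manager lock without contention because runAll() released it before dispatching.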