When an application is submitted with spark-submit in YARN cluster mode, the Spark application continues to run on the cluster even if spark-submit itself is asked to shut down (Ctrl-C, SIGTERM, etc.).
There is code inside org.apache.spark.deploy.yarn.Client.scala that suggests the Spark application on the cluster will be shut down in this case, but that code is not currently reachable.
Example of behavior:
1. Press Ctrl-C or run kill -15 <pid>
2. spark-submit itself dies
3. The job can still be found running on the cluster
When spark-submit is monitoring a YARN application and is itself asked to shut down (SIGTERM, SIGHUP, etc.), it should call yarnClient.killApplication(appId) so that the actual Spark application running on the cluster is killed.
There is already a shutdown hook registered that cleans up temp files. Could this be extended to call yarnClient.killApplication?
I believe the default behavior should be to ask YARN to kill the application; however, I can imagine use cases where you may still want it to keep running. To facilitate these use cases, an option should be provided to skip this hook.
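
A minimal sketch of what such a hook could look like, using the standard Hadoop YarnClient API. The object name, register method, and killOnShutdown flag are illustrative placeholders, not existing Spark code or configuration; in Spark itself this logic would more likely be folded into the existing temp-file cleanup hook rather than registered separately.

import org.apache.hadoop.yarn.api.records.ApplicationId
import org.apache.hadoop.yarn.client.api.YarnClient

object SubmitShutdownHook {
  // Registers a JVM shutdown hook that kills the YARN application unless the
  // user has opted out. killOnShutdown would come from a hypothetical
  // configuration option; it is not an existing Spark setting.
  def register(yarnClient: YarnClient, appId: ApplicationId, killOnShutdown: Boolean): Unit = {
    Runtime.getRuntime.addShutdownHook(new Thread {
      override def run(): Unit = {
        if (killOnShutdown) {
          try {
            // Ask the ResourceManager to kill the running application.
            yarnClient.killApplication(appId)
          } catch {
            case e: Exception =>
              // Best effort: the app may already be finished, or the RM unreachable.
              System.err.println(s"Failed to kill application $appId: ${e.getMessage}")
          }
        }
      }
    })
  }
}

Guarding the kill with a user-settable flag preserves the proposed default (kill the application when spark-submit terminates) while letting detach-style workflows opt out and leave the application running.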