Description
I understand that the yarn-cluster mode is designed around a fire-and-forget model; therefore, terminating the yarn client doesn't kill the AM.
However, it is very common for users to submit Spark jobs via a job scheduler (e.g. Apache Oozie) or a remote job server (e.g. Netflix Genie), where killing the yarn client is expected to terminate the AM.
It is true that the yarn-client mode can be used in such cases, but then the yarn client sometimes needs a lot of heap memory for big jobs. In fact, the yarn-cluster mode is ideal for big jobs because the AM can be given arbitrary heap memory, unlike the yarn client. So it would be very useful to make it possible to kill the AM even in the yarn-cluster mode.
In addition, Spark jobs often become zombie jobs if users Ctrl-C them as soon as they're accepted (but not yet running). Although such jobs are eventually shut down after the AM timeout, it would be nice if the AM could be killed immediately in these cases too.
Issue Links
- relates to SPARK-3591 Provide "fire and forget" option for YARN cluster mode (Resolved)