Details
- Type: Wish
- Status: Resolved
- Priority: Minor
- Resolution: Incomplete
- Affects Version/s: 2.4.0
- Fix Version/s: None
Description
In Structured Streaming we often need to cancel a Spark job in order to close the stream. As far as I can tell, SparkContext does not provide a runJob handle that cleanly signals when a job has been cancelled; it simply throws a generic SparkException. So we are forced to awkwardly parse that SparkException to determine whether the job failed because of a cancellation (which we expect and want to swallow) or because of some other error (which we want to propagate).
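
A minimal Scala sketch of the workaround this forces. The helper name is illustrative, and the "cancelled" substring check is exactly the brittle parsing described above: the failure message comes from the DAGScheduler and is not a stable API.

```scala
import scala.reflect.ClassTag

import org.apache.spark.{SparkContext, SparkException}
import org.apache.spark.rdd.RDD

object CancellationAwareJobs {

  /** Runs a job, returning None if it failed because it was cancelled and
   *  rethrowing any other failure. SparkContext gives callers no structured
   *  way to tell a cancelled job apart from any other failed job, so we
   *  fall back to inspecting the exception message.
   */
  def runJobOrNoneIfCancelled[T, U: ClassTag](
      sc: SparkContext,
      rdd: RDD[T],
      func: Iterator[T] => U): Option[Array[U]] = {
    try {
      Some(sc.runJob(rdd, func))
    } catch {
      // Brittle by necessity: cancellation surfaces as a generic
      // SparkException whose message happens to contain "cancelled".
      case e: SparkException if Option(e.getMessage).exists(_.contains("cancelled")) =>
        None
    }
  }
}
```

Calling code can then treat None as a normal stream shutdown and let every other exception propagate, but any change to the scheduler's message wording would silently break the check.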