Description
Spark tasks respond to cancellation by checking TaskContext.isInterrupted(), but this check is missing on a few critical paths used in Spark SQL, including FileScanRDD, JDBCRDD, and UnsafeSorter-based sorts. This can cause interrupted / cancelled tasks to continue running and become zombies.
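For context, here's a minimal sketch of the cooperative-cancellation pattern these paths are missing. The class name CancellableIterator is illustrative; Spark's real wrapper is org.apache.spark.InterruptibleIterator, which throws TaskKilledException rather than a plain RuntimeException:

import org.apache.spark.TaskContext

// Wrap a delegate iterator so that every hasNext call consults the task's
// interrupted flag. A task that never reaches a check like this keeps
// running after cancellation and becomes a zombie.
class CancellableIterator[T](context: TaskContext, delegate: Iterator[T])
  extends Iterator[T] {

  override def hasNext: Boolean = {
    if (context.isInterrupted()) {
      throw new RuntimeException("task " + context.taskAttemptId() + " was cancelled")
    }
    delegate.hasNext
  }

  override def next(): T = delegate.next()
}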
Here's an example. First, create a giant text file; in my case, I just concatenated /usr/share/dict/words a bunch of times to produce a 2.75 GB file. Then, run a really slow query over that file and try to cancel it:
spark.read.text("/tmp/words").selectExpr("value + value + value").collect()
Even after you cancel the job, this will sit and churn at 100% CPU for a minute or two because the task isn't checking the interrupted flag.
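For reference, here's one programmatic way to trigger the cancellation (a sketch; the job-group name and sleep duration are arbitrary, and hitting Ctrl-C in spark-shell exercises the same code path):

// Run the slow query in a job group on a separate thread, then cancel it.
val worker = new Thread(new Runnable {
  override def run(): Unit = {
    spark.sparkContext.setJobGroup("slow-scan", "zombie-task repro", interruptOnCancel = true)
    spark.read.text("/tmp/words").selectExpr("value + value + value").collect()
  }
})
worker.start()
Thread.sleep(5000) // give the scan time to start churning
spark.sparkContext.cancelJobGroup("slow-scan")
// The executor thread keeps burning CPU after the cancellation request.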
The solution here is to add InterruptibleIterator-style checks to the few locations in Spark SQL where they're currently missing.
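As a self-contained sketch of what such a fix looks like, the snippet below wraps an existing row iterator in InterruptibleIterator so that every hasNext call observes kill requests. The mapPartitions placement is illustrative; the real patch would do the equivalent inside the compute methods of the affected RDDs:

import org.apache.spark.{InterruptibleIterator, TaskContext}

// Wrap the raw iterator produced by a partition scan so that each hasNext
// call checks the task's interrupted flag before yielding another record.
val interruptibleScan = spark.sparkContext
  .parallelize(1 to 10000000, numSlices = 1)
  .mapPartitions { iter =>
    new InterruptibleIterator(TaskContext.get(), iter)
  }

The same wrapping (or an equivalent inline isInterrupted() check in the read loop) applies to FileScanRDD, JDBCRDD, and the UnsafeSorter-based sort paths mentioned above.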