Description
HIVE-19053 and HIVE-19733 added handling of InterruptedException to RemoteSparkJobStatus#getSparkJobInfo and RemoteSparkJobStatus#getSparkStagesInfo. These methods now catch InterruptedException, wrap it in a HiveException, and throw the new HiveException.
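For illustration, a minimal sketch of that wrapping pattern (simplified; jobHandle, the timeout, and the surrounding method body are assumptions for the sketch, not the actual Hive source):

public SparkJobInfo getSparkJobInfo() throws HiveException {
  try {
    // Blocking call to the remote Spark driver; illustrative only.
    return jobHandle.get(timeoutInSeconds, TimeUnit.SECONDS);
  } catch (InterruptedException e) {
    // The interrupt becomes the *direct* cause of the thrown HiveException.
    throw new HiveException(e);
  } catch (ExecutionException | TimeoutException e) {
    throw new HiveException(e);
  }
}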
This HiveException is then caught in RemoteSparkJobMonitor#startMonitor, which checks whether the exception matches the condition:
if (e instanceof InterruptedException || (e instanceof HiveException && e.getCause() instanceof InterruptedException))
Since this condition is met here, the exception is wrapped in yet another HiveException and rethrown. The final exception is therefore a HiveException that wraps a HiveException that wraps an InterruptedException.
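A self-contained demo of the resulting nesting (HiveException is stubbed here so the snippet runs standalone; the real class is org.apache.hadoop.hive.ql.metadata.HiveException):

public class DoubleWrapDemo {
  // Stub with the same cause-nesting behaviour as Hive's HiveException.
  static class HiveException extends Exception {
    HiveException(Throwable cause) { super(cause); }
  }

  public static void main(String[] args) {
    // Step 1: getSparkJobInfo wraps the interrupt in a HiveException.
    Exception fromStatus = new HiveException(new InterruptedException("query cancelled"));

    // Step 2: startMonitor's condition matches and wraps it again.
    Exception e = fromStatus;
    if (e instanceof InterruptedException
        || (e instanceof HiveException && e.getCause() instanceof InterruptedException)) {
      e = new HiveException(e);
    }

    // Final shape: HiveException -> HiveException -> InterruptedException.
    System.out.println(e.getCause() instanceof InterruptedException);            // false
    System.out.println(e.getCause().getCause() instanceof InterruptedException); // true
  }
}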
The double nesting of HiveException breaks the logic in SparkTask#setSparkException, so killJob is never triggered.
As a result, interrupted Hive queries do not kill their corresponding Spark jobs.
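An illustrative sketch of why a one-level cause check misses the interrupt, and how walking the full cause chain would find it (hypothetical helper methods, not the actual SparkTask#setSparkException source):

// Hypothetical one-level check, similar in spirit to the broken logic:
static boolean shouldKillJob(Exception e) {
  // With the doubly wrapped exception, getCause() is the inner HiveException,
  // not the InterruptedException, so this returns false and killJob never runs.
  return e instanceof InterruptedException
      || e.getCause() instanceof InterruptedException;
}

// Walking the entire cause chain would detect the interrupt regardless of
// how many wrapper layers were added:
static boolean interruptedAnywhere(Throwable t) {
  for (Throwable c = t; c != null; c = c.getCause()) {
    if (c instanceof InterruptedException) {
      return true;
    }
  }
  return false;
}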
Attachments
Issue Links