Description
https://github.com/apache/spark/pull/28671 changed the way cleanup is done in SparkExecuteStatementOperation. In cancel(), cleanup (killing jobs) used to be done after setting the state to CANCELED. Now the order is reversed: jobs are killed first, which causes an exception to be thrown inside execute(), so the state of the operation becomes ERROR before it is set to CANCELED. A small sketch of the two orderings follows.
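The effect of the reordering can be illustrated with a minimal sketch. The names below (Operation, StatementState, onJobsKilled, cancelOld, cancelNew) are illustrative stand-ins, not the actual SparkExecuteStatementOperation / HiveServer2 APIs; the point is only the order in which the terminal state is set relative to killing the jobs.

{code:scala}
import scala.collection.mutable.ArrayBuffer

object CancelOrderingSketch {
  object StatementState extends Enumeration {
    val RUNNING, ERROR, CANCELED = Value
  }
  import StatementState._

  class Operation {
    private var state = RUNNING
    // Record every transition so the two orderings can be compared.
    val transitions = ArrayBuffer[StatementState.Value]()

    private def setState(s: StatementState.Value): Unit = {
      state = s
      transitions += s
    }

    // Stand-in for execute(): when its Spark jobs are killed it catches the
    // resulting exception and marks the operation as ERROR, unless the
    // operation is already in a terminal state.
    def onJobsKilled(): Unit = {
      if (state == RUNNING) setState(ERROR)
    }

    // Old ordering (before the PR): CANCELED is set first, so the failure
    // seen inside execute() finds a terminal state and does nothing.
    def cancelOld(): Unit = { setState(CANCELED); onJobsKilled() }

    // New ordering (after the PR): jobs are killed first, so execute()
    // reports ERROR before cancel() gets to set CANCELED.
    def cancelNew(): Unit = { onJobsKilled(); setState(CANCELED) }
  }

  def main(args: Array[String]): Unit = {
    val oldOp = new Operation; oldOp.cancelOld()
    val newOp = new Operation; newOp.cancelNew()
    println(s"old ordering: ${oldOp.transitions.mkString(" -> ")}") // CANCELED
    println(s"new ordering: ${newOp.transitions.mkString(" -> ")}") // ERROR -> CANCELED
  }
}
{code}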
Issue Links
- is caused by: SPARK-31859 Thriftserver with spark.sql.datetime.java8API.enabled=true (Resolved)