Speculative execution can often result in a CommitDeniedException, but ideally this shouldn't cause the job to fail. Changes were therefore made as part of
SPARK-8167 to ensure that the CommitDeniedException is caught and reported with a failure reason that doesn't increment the task failure count.
However, I'm still seeing this exception cause jobs to fail on the 1.6.1 release.
16/04/04 11:36:02 ERROR InsertIntoHadoopFsRelation: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 18 in stage 315.0 failed 8 times, most recent failure: Lost task 18.8 in stage 315.0 (TID 100793, qaphdd099.quantium.com.au.local): org.apache.spark.SparkException: Task failed while writing rows.
Caused by: java.lang.RuntimeException: Failed to commit task
... 8 more
Caused by: org.apache.spark.executor.CommitDeniedException: attempt_201604041136_0315_m_000018_8: Not committed because the driver did not authorize commit
... 9 more
It seems to me that the CommitDeniedException gets wrapped in a RuntimeException at WriterContainer.scala#L286 and then in a SparkException at InsertIntoHadoopFsRelation.scala#L154, which means it can no longer be handled properly by the catch block at Executor.scala#L290 (see the simplified paraphrase below).
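For reference, the two wrapping sites look roughly like this. This is a simplified paraphrase of the 1.6 code paths, not verbatim source; commitTaskOutput and writeRowsAndCommit are placeholder names for the real bodies:

import org.apache.spark.SparkException

// WriterContainer.commitTask (around WriterContainer.scala#L286), simplified:
// any failure during the commit, including a CommitDeniedException, is re-thrown
// wrapped in a RuntimeException.
try {
  commitTaskOutput()  // placeholder for the actual commit call
} catch {
  case cause: Throwable =>
    throw new RuntimeException("Failed to commit task", cause)
}

// InsertIntoHadoopFsRelation (around InsertIntoHadoopFsRelation.scala#L154), simplified:
// that RuntimeException is wrapped once more, so by the time the error reaches the
// executor the CommitDeniedException is two levels deep in the cause chain.
try {
  writeRowsAndCommit()  // placeholder for the actual write-and-commit loop
} catch {
  case cause: Throwable =>
    writerContainer.abortTask()
    throw new SparkException("Task failed while writing rows.", cause)
}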
Might the solution be for that catch block to type match on the inner-most cause of the error?
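Here is a minimal sketch of what unwrapping the cause chain could look like, assuming the change lives inside Spark itself (CommitDeniedException is private[spark]); findCommitDenied is a hypothetical helper, not an existing Spark method:

import scala.annotation.tailrec
import org.apache.spark.executor.CommitDeniedException

object CommitDeniedUnwrap {
  // Walk the cause chain until a CommitDeniedException is found or the chain ends.
  @tailrec
  def findCommitDenied(t: Throwable): Option[CommitDeniedException] = t match {
    case null => None
    case cde: CommitDeniedException => Some(cde)
    case other => findCommitDenied(other.getCause)
  }
}

// The catch block in Executor.scala could then treat a wrapped CommitDeniedException
// the same way as a top-level one, roughly:
//
//   case t: Throwable if CommitDeniedUnwrap.findCommitDenied(t).isDefined =>
//     // report the same TaskCommitDenied end reason the existing
//     // `case cDE: CommitDeniedException` branch uses, so the attempt
//     // doesn't count towards spark.task.maxFailures
//
// instead of only matching `case cDE: CommitDeniedException` at the top level.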