Spark / SPARK-40106

Task failure handlers should always run if the task failed


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.3.0
    • Fix Version/s: 3.4.0
    • Component/s: Spark Core
    • Labels: None

    Description

      Today, if the task body succeeds but a task completion listener fails, task failure listeners are not called, even though the task has in fact failed at that point.

      If a completion listener fails, and failure listeners were not previously invoked, we should invoke them before running the remaining completion listeners.

      Such a change would increase the utility of task listeners, especially ones intended to assist with task cleanup.
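      For concreteness, here is a minimal sketch of the scenario using the public TaskContext listener API; registerListeners and the Closeable resource are illustrative stand-ins, not code from Spark:

          import java.io.Closeable

          import org.apache.spark.TaskContext

          // `resource` stands in for anything the task must release, e.g. a writer.
          def registerListeners(ctx: TaskContext, resource: Closeable): Unit = {
            ctx.addTaskFailureListener { (_, error) =>
              // Today this does NOT run when the only failure comes from a
              // completion listener, even though the task fails at that point.
              println(s"Task failed, cleaning up: $error")
            }
            ctx.addTaskCompletionListener[Unit] { _ =>
              // If this throws after the task body succeeded, the task fails,
              // but the failure listener above is currently never invoked.
              resource.close()
            }
          }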

      To give one arbitrary example, code like the following appears in several places in the codebase (taken from the executeTask method of FileFormatWriter.scala):

          try {
            Utils.tryWithSafeFinallyAndFailureCallbacks(block = {
              // Execute the task to write rows out and commit the task.
              dataWriter.writeWithIterator(iterator)
              dataWriter.commit()
            })(catchBlock = {
              // If there is an error, abort the task
              dataWriter.abort()
              logError(s"Job $jobId aborted.")
            }, finallyBlock = {
              dataWriter.close()
            })
          } catch {
            case e: FetchFailedException =>
              throw e
            case f: FileAlreadyExistsException if SQLConf.get.fastFailFileFormatOutput =>
              // If any output file to write already exists, it does not make sense to re-run this task.
              // We throw the exception and let Executor throw ExceptionFailure to abort the job.
              throw new TaskOutputFileAlreadyExistException(f)
            case t: Throwable =>
              throw QueryExecutionErrors.taskFailedWhileWritingRowsError(t)
          }

      If failure listeners were reliably called, the above idiom could potentially be factored out as two failure listeners plus a completion listener, and reused rather than duplicated.
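      One way that refactoring could look (a sketch only, not the actual change; dataWriter, iterator, jobId, and logError are the same values used in the snippet above, and the catch clauses that rewrap exceptions would still be needed):

          val ctx = TaskContext.get()
          // Failure listener takes over the catchBlock: with this change it would
          // run even when the failure originates in a completion listener.
          ctx.addTaskFailureListener { (_, _) =>
            dataWriter.abort()
            logError(s"Job $jobId aborted.")
          }
          // Completion listener takes over the finallyBlock: it runs on both
          // success and failure.
          ctx.addTaskCompletionListener[Unit] { _ =>
            dataWriter.close()
          }
          // The task body is then just the write-and-commit sequence.
          dataWriter.writeWithIterator(iterator)
          dataWriter.commit()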


          People

            Assignee: Ryan Johnson
            Reporter: Ryan Johnson
            Votes: 0
            Watchers: 3
