Hadoop Common / HADOOP-1877

It is not possible to make a job fail without retries


Details

    • Type: New Feature
    • Status: Closed
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Environment: all

    Description

      If a job task fails for non-transient reasons (for example, the job configuration is incorrect), Hadoop will retry the failed task as many times as retries have been configured, and it will fail again and again.

      There should be a JobKillException that can be thrown from the configure, map, and reduce methods, causing Hadoop not to retry the task and to kill the job instead, as sketched below.
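
      Below is a minimal sketch of what the proposed API could look like. JobKillException, StrictMapper, and the my.required.setting key are hypothetical names used for illustration; the exception was never added to Hadoop (the issue was resolved Won't Fix), and the mapper uses the old org.apache.hadoop.mapred API of that era.

{code:java}
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

/**
 * Hypothetical unchecked exception: on seeing it, the framework would
 * kill the whole job instead of retrying the failed task.
 */
class JobKillException extends RuntimeException {
    JobKillException(String message) {
        super(message);
    }
}

public class StrictMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

    private String requiredSetting;

    @Override
    public void configure(JobConf job) {
        requiredSetting = job.get("my.required.setting");
        if (requiredSetting == null) {
            // A missing setting is not transient; retries cannot fix it,
            // so signal the framework to fail the job immediately.
            throw new JobKillException("my.required.setting is not set");
        }
    }

    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output,
                    Reporter reporter) throws IOException {
        output.collect(new Text(requiredSetting), new LongWritable(1));
    }
}
{code}

      A similar effect can be approximated without a new exception by setting mapred.map.max.attempts and mapred.reduce.max.attempts to 1, although that also disables retries for genuinely transient failures.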


People

    • Assignee: Unassigned
    • Reporter: Alejandro Abdelnur (tucu00)
    • Votes: 0
    • Watchers: 0
