Hadoop Common / HADOOP-1144

Hadoop should allow a configurable percentage of failed map tasks before declaring a job failed.


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.12.0
    • Fix Version/s: 0.13.0
    • Component/s: None
    • Labels: None

Description

    In our environment some map tasks can fail repeatedly because of corrupt input data, which is often non-critical as long as the amount is limited. In such cases it is annoying that the whole Hadoop job fails and cannot be restarted until the corrupt data have been identified and eliminated from the input. It would be extremely helpful if the job configuration allowed the user to specify how many map tasks are permitted to fail.
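
    The attached patch is believed to add per-job knobs for exactly this tolerance. Below is a minimal driver sketch against the classic org.apache.hadoop.mapred API; the setter JobConf.setMaxMapTaskFailuresPercent(int) and the property name mapred.max.map.failures.percent are assumptions about what the patch introduces, not something confirmed by this issue text.

    {code:java}
    // Hypothetical driver showing the tolerated-failure knob; the setter and
    // property names below are assumptions about what this patch adds.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class TolerantJobDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(TolerantJobDriver.class);
        conf.setJobName("tolerant-job");

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Allow up to 5% of map tasks to fail (e.g. on corrupt input splits)
        // before the job as a whole is declared failed.
        conf.setMaxMapTaskFailuresPercent(5);
        // Equivalent property form (assumed name):
        // conf.setInt("mapred.max.map.failures.percent", 5);

        JobClient.runJob(conf);
      }
    }
    {code}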

Attachments

    1. HADOOP-1144_20070503_1.patch (19 kB, Arun Murthy)


People

    Assignee: Arun Murthy (acmurthy)
    Reporter: Christian Kunz (ckunz)
    Votes: 1
    Watchers: 1

Dates

    Created:
    Updated:
    Resolved: