Hadoop Common / HADOOP-142

failed tasks should be rescheduled on different hosts after other jobs


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.1.1
    • Fix Version/s: 0.2.0
    • Component/s: None
    • Labels: None

    Description

      Currently, when a task fails, it is usually rerun immediately on the same host. This causes problems in a couple of ways:
      1. The retry is more likely to fail again on the same host.
      2. If there is cleanup code (such as clearing pendingCreates), it does not always run immediately, leading to cascading failures.

      For a first pass, I propose that when a task fails, we start the scan for new tasks to launch at the following task of the same type (within that job). So if maps[99] fails, when we are looking to assign new map tasks from this job, we scan maps[100]...maps[N], then wrap around to maps[0]...maps[99] (see the sketch below).
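      A minimal sketch of that wrap-around scan, in Java; the boolean runnable array, the index bookkeeping, and the method names are hypothetical stand-ins for illustration, not the actual JobInProgress code:

      public class WrapAroundScan {
          /**
           * Scan for the next runnable task, starting just after the index of the
           * most recently failed task, so the failed task is considered last.
           */
          static int findTaskToLaunch(boolean[] runnable, int lastFailedIndex) {
              int n = runnable.length;
              for (int offset = 1; offset <= n; offset++) {
                  int candidate = (lastFailedIndex + offset) % n;
                  if (runnable[candidate]) {
                      return candidate;
                  }
              }
              return -1;  // nothing runnable right now
          }

          public static void main(String[] args) {
              // maps[99] failed in a 200-task job: the scan visits 100..199, then 0..99.
              boolean[] runnable = new boolean[200];
              runnable[150] = true;
              System.out.println(findTaskToLaunch(runnable, 99));  // prints 150
          }
      }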

      A more involved change would avoid running tasks on nodes where they have failed before. This is a little tricky, because you don't want to prevent re-execution of tasks on one-node clusters, and the job tracker schedules tasks for one task tracker at a time.
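      A minimal sketch of that "skip hosts where the task already failed" idea, assuming a hypothetical per-task set of failed host names; this is not the actual JobTracker code, and the one-node-cluster exception is handled explicitly:

      import java.util.HashMap;
      import java.util.HashSet;
      import java.util.Map;
      import java.util.Set;

      public class FailedHostTracker {
          // Hosts on which each task (by id) has already failed; names are illustrative.
          private final Map<String, Set<String>> failedHostsByTask = new HashMap<>();

          void recordFailure(String taskId, String host) {
              failedHostsByTask.computeIfAbsent(taskId, k -> new HashSet<>()).add(host);
          }

          /**
           * A task may be scheduled on a host unless it has already failed there,
           * except on a single-node cluster, where refusing the host would block
           * re-execution entirely.
           */
          boolean canSchedule(String taskId, String host, int clusterSize) {
              if (clusterSize <= 1) {
                  return true;
              }
              Set<String> failed = failedHostsByTask.get(taskId);
              return failed == null || !failed.contains(host);
          }
      }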

      Attachments

        1. no-repeat-failures.patch
          8 kB
          Owen O'Malley


          People

            Assignee: Owen O'Malley
            Reporter: Owen O'Malley
            Votes: 0
            Watchers: 0
