  1. Hadoop Common
  2. HADOOP-4261

Jobs failing in the init stage will never cleanup


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.19.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Pre-HADOOP-3150, if a job failed in the init stage, job.kill() was called. This made sure the job was cleaned up with respect to:

      • status set to KILLED/FAILED
      • job files from the system dir are deleted
      • closing of job history files
      • making jobtracker aware of this through jobTracker.finalizeJob()
      • cleaning up the data structures via JobInProgress.garbageCollect()

      Now, if the job fails in the init stage, job.fail() is called, which does not do the cleanup. HADOOP-3150 introduced cleanup tasks that are launched once the job completes, i.e., is killed, failed, or succeeded. The JobTracker will never consider this job for scheduling, as the job will stay in the PREP state forever.
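      The lifecycle bug above can be sketched in miniature. This is a hedged, simplified stand-in, not the actual Hadoop JobInProgress class: the class, method, and state names below are illustrative only. It shows why a fail() that skips the finalization kill() used to perform leaves the job stuck in PREP, and what the fixed path must do.

      ```java
      import java.util.ArrayList;
      import java.util.List;

      public class InitFailureSketch {
          // Simplified job states; PREP is the pre-init state the scheduler polls.
          enum State { PREP, RUNNING, KILLED, FAILED, SUCCEEDED }

          static class JobInProgress {
              State state = State.PREP;
              List<String> log = new ArrayList<>();

              // Pre-HADOOP-3150 path: kill() fully finalized the job.
              void kill() {
                  state = State.KILLED;
                  garbageCollect();
              }

              // Post-HADOOP-3150 init-failure path: fail() skips the
              // cleanup, so the state never leaves PREP and the cleanup
              // steps below never run.
              void failWithoutCleanup() {
                  log.add("fail flag set, no cleanup");
              }

              // Fixed path: on init failure, do the same finalization as kill().
              void failWithCleanup() {
                  state = State.FAILED;
                  garbageCollect();
              }

              // Stand-in for the cleanup steps listed in the description:
              // delete job files, close history files, finalize on the JobTracker.
              void garbageCollect() {
                  log.add("deleted job files from system dir");
                  log.add("closed job history files");
                  log.add("finalized job on the JobTracker");
              }
          }

          public static void main(String[] args) {
              JobInProgress broken = new JobInProgress();
              broken.failWithoutCleanup();
              System.out.println("broken job state: " + broken.state); // still PREP

              JobInProgress fixed = new JobInProgress();
              fixed.failWithCleanup();
              System.out.println("fixed job state: " + fixed.state);   // FAILED
          }
      }
      ```

      The sketch makes the scheduling consequence concrete: as long as fail() never moves the state out of PREP or runs garbageCollect(), the JobTracker keeps seeing an uninitialized job and the files and history handles leak.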

        Attachments

        1. patch-4261.txt
          68 kB
          Amareshwari Sriramadasu
        2. patch-4261.txt
          72 kB
          Amareshwari Sriramadasu
        3. patch-4261.txt
          74 kB
          Amareshwari Sriramadasu
        4. patch-4261.txt
          76 kB
          Amareshwari Sriramadasu

              People

              • Assignee:
                amareshwari Amareshwari Sriramadasu
              • Reporter:
                amar_kamat Amar Kamat
              • Votes:
                0
              • Watchers:
                4

                Dates

                • Created:
                • Updated:
                • Resolved: