Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
Before HADOOP-3150, if a job failed in the init stage, job.kill() was called. This ensured that the job was cleaned up with respect to:
- status set to KILLED/FAILED
- job files from the system dir are deleted
- closing of job history files
- making jobtracker aware of this through jobTracker.finalizeJob()
- cleaning up the data structures via JobInProgress.garbageCollect()
Now, if the job fails in the init stage, job.fail() is called, which does not perform this cleanup. HADOOP-3150 introduced cleanup tasks that are launched once the job completes, i.e. is killed, failed, or succeeded. The JobTracker will never consider such a job for scheduling, as the job remains in the PREP state forever.
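The two init-failure paths described above can be sketched as follows. This is an illustrative Java sketch, not Hadoop source: the class, field, and log-entry names are hypothetical stand-ins for the JobInProgress/JobTracker interactions named in the description.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the job states relevant to this report.
enum JobState { PREP, KILLED, FAILED }

class JobSketch {
    JobState state = JobState.PREP;  // jobs sit in PREP during init
    final List<String> cleanupLog = new ArrayList<>();

    // Pre-HADOOP-3150 init-failure path: kill() runs the full cleanup.
    void kill() {
        state = JobState.KILLED;  // status set to KILLED/FAILED
        cleanupLog.add("delete job files from system dir");
        cleanupLog.add("close job history files");
        cleanupLog.add("jobTracker.finalizeJob()");
        cleanupLog.add("JobInProgress.garbageCollect()");
    }

    // Post-HADOOP-3150 init-failure path: fail() skips the cleanup, so
    // the job never leaves PREP -- the bug described in this report.
    void fail() {
        // no state transition, no cleanup
    }
}
```

Because fail() never transitions the job out of PREP, the cleanup tasks that HADOOP-3150 launches on completion are never triggered for init failures.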
Attachments
Issue Links
- blocks HADOOP-4018 limit memory usage in jobtracker (Closed)