Details
Type: Bug
Status: Closed
Priority: Critical
Resolution: Fixed
Affects Version/s: 0.17.0
Fix Version/s: None
Component/s: None
Description
The net effect of this is that, with a long-running TaskTracker, it takes a very long time for ReduceTasks on that TaskTracker to fetch map outputs: the TaskTracker fetches map output locations for all reduce tasks in TaskTracker.runningJobs, including the stale ReduceTasks. There is a 5-second delay between two requests, so when there are tens of stale ReduceTasks, a running ReduceTask waits a long time to get its map output locations. Of course this also grows memory usage, but at this rate that is not a serious problem.
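For illustration, here is a minimal sketch of the polling pattern described above, assuming a loop that visits every entry in runningJobs with a fixed 5-second pause between requests; the class and method names are hypothetical stand-ins, not the actual Hadoop 0.17 code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model: with N stale entries in runningJobs, a live reducer's job is
// visited only once per full pass, i.e. roughly every N * 5 seconds.
public class MapEventsPollerSketch {
    static final long DELAY_MS = 5_000;                                 // 5-second delay between requests
    final Map<String, Boolean> runningJobs = new ConcurrentHashMap<>(); // jobId -> entry (possibly stale)

    void pollLoop() throws InterruptedException {
        while (!runningJobs.isEmpty()) {
            for (String jobId : runningJobs.keySet()) {
                fetchMapOutputLocations(jobId);   // one request per tracked job per pass
                Thread.sleep(DELAY_MS);           // a stale job still consumes a 5-second slot
            }
        }
    }

    void fetchMapOutputLocations(String jobId) {
        System.out.println("fetching map output locations for job " + jobId);
    }
}
```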
I've verified the bug by adding an HTML table for TaskTracker.runningJobs to the TaskTracker HTTP interface, on a 2-node cluster, with a single-mapper, single-reducer job in which the mapper succeeds and the reducer fails. I can still see the ReduceTask in TaskTracker.runningJobs, while it is not in the first two tables (TaskTracker.tasks and TaskTracker.runningTasks).
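A sketch of that kind of debugging aid, assuming the table is built as a plain HTML string; the helper below is hypothetical and not part of the TaskTracker HTTP interface:

```java
import java.util.Map;

// Hypothetical helper: render the runningJobs bookkeeping as an HTML table
// so that stale entries become visible on the TaskTracker status page.
public class RunningJobsTableSketch {
    static String toHtmlTable(Map<String, Integer> runningJobs) {  // jobId -> tasks still tracked
        StringBuilder sb = new StringBuilder("<table><tr><th>JobId</th><th>Tracked tasks</th></tr>");
        for (Map.Entry<String, Integer> e : runningJobs.entrySet()) {
            sb.append("<tr><td>").append(e.getKey())
              .append("</td><td>").append(e.getValue()).append("</td></tr>");
        }
        return sb.append("</table>").toString();
    }
}
```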
Details:
TaskRunner.run() will call TaskTracker.reportTaskFinished() when the task fails,
which calls TaskTracker.TaskInProgress.taskFinished,
which calls TaskTracker.TaskInProgress.cleanup(),
which calls TaskTracker.tasks.remove(taskId).
In short, it removes a failed task from TaskTracker.tasks but not from TaskTracker.runningJobs, as sketched below.
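A minimal sketch of that bookkeeping, assuming a stripped-down TaskTracker with just the two maps named above (the types and method signature are simplified stand-ins, not the real code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Stripped-down stand-in for the TaskTracker-side bookkeeping described above.
public class TaskTrackerSketch {
    final Map<String, Object> tasks = new HashMap<>();             // taskId -> TaskInProgress
    final Map<String, Set<String>> runningJobs = new HashMap<>();  // jobId  -> task attempts of that job

    // Corresponds to TaskInProgress.cleanup() in the call chain above.
    void cleanup(String jobId, String taskId) {
        tasks.remove(taskId);
        // Bug: runningJobs.get(jobId) still contains taskId, so the stale
        // ReduceTask keeps being polled for map output locations.
    }
}
```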
Then the failure is reported to JobTracker.
JobTracker.heartbeat will call processHeartbeat,
which calls updateTaskStatuses,
which calls tip.getJob().updateTaskStatus,
which calls JobInProgress.failedTask,
which calls JobTracker.markCompletedTaskAttempt,
which puts the task into trackerToMarkedTasksMap,
and then JobTracker.heartbeat will call removeMarkedTasks,
which calls removeTaskEntry,
which removes it from trackerToTaskMap.
JobTracker.heartbeat will also call JobTracker.getTasksToKill,
which reads the <tracker, task> pairs from trackerToTaskMap
and asks the tracker to KILL the task or the task's job.
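To make the ordering concrete, here is a simplified stand-in for the JobTracker bookkeeping implied by this chain; the map types and signatures are assumptions, not the real Hadoop code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy model: once removeMarkedTasks() has purged the failed attempt from
// trackerToTaskMap, getTasksToKill() can no longer see it, so no KILL action
// is ever sent to the tracker that still holds the stale entry.
public class JobTrackerSketch {
    final Map<String, Set<String>> trackerToTaskMap = new HashMap<>();
    final Map<String, Set<String>> trackerToMarkedTasksMap = new HashMap<>();

    void markCompletedTaskAttempt(String tracker, String taskId) {
        trackerToMarkedTasksMap.computeIfAbsent(tracker, t -> new HashSet<>()).add(taskId);
    }

    void removeMarkedTasks(String tracker) {
        Set<String> marked = trackerToMarkedTasksMap.remove(tracker);
        if (marked == null) return;
        Set<String> live = trackerToTaskMap.get(tracker);
        if (live != null) live.removeAll(marked);   // the failed attempt leaves trackerToTaskMap here
    }

    List<String> getTasksToKill(String tracker) {
        // Reads <tracker, task> pairs; the purged attempt is no longer among them.
        return new ArrayList<>(trackerToTaskMap.getOrDefault(tracker, new HashSet<>()));
    }
}
```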
In the case where there is only one task for a specific job on a specific tracker and that task failed (NOTE: and that task is not the last failed try of the job - otherwise JobTracker.getTasksToKill will pick it up before removeMarkedTasks comes in and removes it from trackerToTaskMap), the TaskTracker will never receive the KILL-task or KILL-job message from the JobTracker. As a result, the task will remain in TaskTracker.runningJobs forever.
Solution:
Remove the task from TaskTracker.runningJobs at the same time that we remove it from TaskTracker.tasks (see the sketch below).
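In terms of the toy TaskTracker sketch above, the fix amounts to one extra removal in the same cleanup step (a sketch of the idea, not the actual patch):

```java
// Inside the simplified cleanup(jobId, taskId) shown earlier:
void cleanup(String jobId, String taskId) {
    tasks.remove(taskId);                           // existing behavior
    Set<String> jobTasks = runningJobs.get(jobId);  // proposed addition:
    if (jobTasks != null) {
        jobTasks.remove(taskId);                    // drop the attempt from runningJobs as well
    }
}
```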
Attachments
Issue Links
- incorporates: HADOOP-3713 broken symlinks in jobcache when local tasks are done but job is in progress (Closed)
- relates to: HADOOP-3386 the job directory of a failed task may stay forever on a tasktracker node (Resolved)