Affects Version/s: None
Fix Version/s: 0.2.0
This is an odd case. The main cause will be programmer error, but I suppose it could happen during normal processing. Either way, it would be grand if Hadoop were better able to deal with it.
My map task completed 'successfully', but because I had started threads inside my task that were not set to be of daemon type, and that under certain circumstances were left running, my child stuck around after reporting 'done': the JVM won't go down while non-daemon threads are still running. After ten minutes the TT steps in, kills the child, and cleans up the successful output. Because the JT has been told the task completed successfully, reducers keep showing up looking for the now-removed output, until the job fails.
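As background, the JVM only exits once every non-daemon (user) thread has finished, which is exactly why the child lingered. A minimal sketch (class and variable names are hypothetical, not from Hadoop) showing how marking a worker thread as a daemon avoids the hang:

```java
public class DaemonThreadExample {
    public static void main(String[] args) {
        // A non-daemon thread like this one would keep the JVM alive
        // long after main() returns -- the situation described above.
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000); // simulates a lingering background task
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // Marking it as a daemon lets the JVM exit as soon as all
        // user threads (here, just main) are done.
        worker.setDaemon(true); // must be called before start()
        worker.start();
        System.out.println("worker.isDaemon() = " + worker.isDaemon());
        // main() returns here; because worker is a daemon, the JVM
        // shuts down promptly instead of waiting out the sleep.
    }
}
```

Note that setDaemon(true) must be called before the thread is started; calling it afterwards throws IllegalThreadStateException.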
Below is an illustration of the problem using log output:
060501 090401 task_0001_m_000798_0 0.99491096% adding http://www.score.umd.edu/album.jpg 24891 image/jpeg
060501 090401 task_0001_m_000798_0 1.0% adding http://www.score.umd.edu/album.jp
060501 090401 Task task_0001_m_000798_0 is done.
060501 091410 task_0001_m_000798_0: Task failed to report status for 608 seconds
060501 091410 Calling cleanup because was killed or FAILED task_0001_m_000798_0
060501 091410 task_0001_m_000798_0 done; removing files.
060501 091422 SEVERE Can't open map output:/1/hadoop/tmp/task_0001_m_000798_0/pa
and on and on.