Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Fix Version/s: 0.16.0
- Component/s: None
- Labels: None
Description
Currently, a reduce task that fails to fetch output from more than MAX_FAILED_UNIQUE_FETCHES (hard-coded to 5) different mappers will itself fail (I believe this was introduced in HADOOP-1158).
This causes problems for long-running jobs with a large number of mappers executing in multiple waves:
Otherwise healthy reduce tasks fail because of too many fetch failures caused by resource contention, and the replacement reduce tasks have to re-fetch all data from the already successfully completed mappers, introducing a lot of additional I/O overhead. Moreover, the whole job fails once the same reducer exhausts its maximum number of attempts.
The limit should be a function of the number of mappers and/or waves of mappers, and the policy should be more conservative (e.g., there is no need to fail a reducer when speculative execution is enabled and there are enough slots to start speculatively executed reducers). We might also consider not counting such a restart against the reducer's number of attempts.
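The proposed change could be sketched roughly as follows. This is a hypothetical illustration, not actual Hadoop code: the class name, method name, and the 10% scaling fraction are all assumptions made for the sake of the example; only the constant value 5 comes from the issue description.

```java
// Hypothetical sketch: scale the allowed number of unique fetch failures
// with the number of mappers, instead of using a fixed constant.
public class FetchFailureThreshold {

    // The hard-coded limit described in this issue.
    static final int MAX_FAILED_UNIQUE_FETCHES = 5;

    // Allow fetch failures from up to 10% of the mappers (assumed fraction),
    // but never fewer than the legacy constant, so small jobs keep the
    // current behaviour.
    static int maxAllowedFetchFailures(int numMappers) {
        return Math.max(MAX_FAILED_UNIQUE_FETCHES, numMappers / 10);
    }

    public static void main(String[] args) {
        // Small job: the legacy floor of 5 applies.
        System.out.println(maxAllowedFetchFailures(20));   // prints 5
        // Large job with many mappers: the limit scales up.
        System.out.println(maxAllowedFetchFailures(5000)); // prints 500
    }
}
```

Under such a scheme, a reducer in a 5000-mapper job would tolerate transient contention across many more hosts before being killed, while small jobs would behave exactly as today.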
Issue Links
- is part of: HADOOP-2247 Mappers fail easily due to repeated failures (Closed)
- is related to: HADOOP-2247 Mappers fail easily due to repeated failures (Closed)