Details
Type: New Feature
Status: Closed
Priority: Minor
Resolution: Won't Fix
Affects Version/s: None
Fix Version/s: None
Component/s: None
Labels: None
Environment: all
Description
If a task fails for a non-transient reason (for example, the job configuration is incorrect), Hadoop will retry the failed task as many times as the configured retry limit allows, and it will fail again every time.
There should be a JobKillException that the configure, map, and reduce methods can throw to make Hadoop kill the job instead of retrying the task. A rough sketch of how this could look from user code is below.
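A minimal sketch of the proposed usage. JobKillException is the hypothetical class this issue proposes, not an existing Hadoop API; the class name ValidatingMapper and the property my.required.param are made up for illustration. It extends RuntimeException here only so it can also be thrown from configure(), which declares no checked exceptions.
{code:java}
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical exception proposed by this issue; it does not exist in Hadoop.
class JobKillException extends RuntimeException {
    public JobKillException(String message) {
        super(message);
    }
}

public class ValidatingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private String requiredParam;

    @Override
    public void configure(JobConf job) {
        requiredParam = job.get("my.required.param");
        if (requiredParam == null) {
            // Non-transient failure: retrying cannot succeed, so under the
            // proposal the framework would kill the job rather than
            // rescheduling the task up to the configured retry limit.
            throw new JobKillException("my.required.param is not set");
        }
    }

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        output.collect(new Text(requiredParam), value);
    }
}
{code}
Today a misconfiguration like the one above simply repeats until the retry limit is exhausted; the proposed exception would let the framework distinguish it from a transient failure and fail fast.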