Description
We run Hive jobs with a per-job DFS quota limit (3 TB). When a job hits the DFS quota limit, the task that hit it fails, and there are several task retries before the job itself fails. The retries are not helpful because the job will always fail anyway. In one of the worse cases, a job with a single reduce task wrote more than 3 TB to HDFS over 20 hours; the reduce task exceeded the quota limit and retried 4 times before the job finally failed, consuming a lot of unnecessary resources. This ticket aims to provide a feature that lets a job fail fast when it writes too much data to the DFS and exceeds the DFS quota limit. The fast-fail feature was introduced in MAPREDUCE-7022 and MAPREDUCE-6489. A minimal sketch of the idea is shown below.
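The sketch below illustrates the fail-fast idea only, not the actual MapReduce/Tez patch: when a write fails with HDFS's DSQuotaExceededException (a real class in org.apache.hadoop.hdfs.protocol), the error is rethrown as a fatal, non-retriable failure instead of going through the normal retry path. The names isQuotaError, writeOrFailFast, FatalTaskError, and WriteAction are hypothetical and used for illustration only.

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;

public final class QuotaFastFailSketch {

  /** Returns true if the exception (or one of its causes) is a DFS space quota violation. */
  static boolean isQuotaError(IOException e) {
    Throwable t = e;
    while (t != null) {
      if (t instanceof DSQuotaExceededException) {
        return true;
      }
      t = t.getCause();
    }
    return false;
  }

  /** Hypothetical unchecked error that a task runner would treat as non-retriable. */
  static final class FatalTaskError extends RuntimeException {
    FatalTaskError(String msg, Throwable cause) {
      super(msg, cause);
    }
  }

  /** Hypothetical functional interface for an IO-throwing write used in this sketch. */
  interface WriteAction {
    void run() throws IOException;
  }

  /** Wraps an output write so quota violations fail fast instead of being retried. */
  static void writeOrFailFast(WriteAction write) {
    try {
      write.run();
    } catch (IOException e) {
      if (isQuotaError(e)) {
        // Retrying cannot succeed: the quota will still be exceeded, so abort the job.
        throw new FatalTaskError("DFS quota exceeded; failing job fast", e);
      }
      // Other IO errors keep whatever retry behaviour the framework already has.
      throw new RuntimeException(e);
    }
  }
}
{code}

In the real feature the decision is made by the task/AM framework rather than by user code, but the key design choice is the same: a quota violation is classified as a deterministic failure, so retrying only wastes cluster resources.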
Attachments
Issue Links
- relates to TEZ-4110 Make Tez fail fast when DFS quota is exceeded (Resolved)