Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.2.0
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
- Release Note: Introduced record skipping where tasks fail on certain records. (org.apache.hadoop.mapred.SkipBadRecords)
Description
MapReduce should skip records that throw exceptions.
If the exception is thrown in RecordReader.next(), then RecordReader implementations should automatically skip to the start of a subsequent record.
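A minimal sketch of how a reader could do this, assuming a hypothetical wrapper class (SkippingRecordReader and its skipped counter are illustrative, not part of Hadoop): runtime exceptions from the wrapped reader's next() are counted and the reader simply tries the next record, while real I/O errors still propagate and fail the task.

    import java.io.IOException;
    import org.apache.hadoop.mapred.RecordReader;

    // Hypothetical wrapper: skips records whose parsing throws, assuming the
    // underlying reader has already advanced past the bad bytes when it throws.
    public class SkippingRecordReader<K, V> implements RecordReader<K, V> {
      private final RecordReader<K, V> delegate;
      private long skipped = 0;

      public SkippingRecordReader(RecordReader<K, V> delegate) {
        this.delegate = delegate;
      }

      public boolean next(K key, V value) throws IOException {
        while (true) {
          try {
            return delegate.next(key, value);  // normal case: return the record
          } catch (RuntimeException e) {
            skipped++;  // bad record: count it and move on to the next one
            // IOExceptions are deliberately not caught, so real I/O errors
            // still fail the task.
          }
        }
      }

      public long getSkippedCount() { return skipped; }

      public K createKey() { return delegate.createKey(); }
      public V createValue() { return delegate.createValue(); }
      public long getPos() throws IOException { return delegate.getPos(); }
      public float getProgress() throws IOException { return delegate.getProgress(); }
      public void close() throws IOException { delegate.close(); }
    }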
Exceptions in map and reduce implementations can simply be logged, unless they occur during RecordWriter.write(). Cancelling partial output could be hard, so such output errors will still cause the task to fail.
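As a rough illustration of that split (TolerantMapper and the counter names below are assumptions, not the actual Hadoop implementation): runtime exceptions from user logic are logged and counted, while IOExceptions from the output path, which ultimately reaches RecordWriter.write(), are left to propagate and fail the task.

    import java.io.IOException;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical mapper that logs and counts bad records instead of failing.
    public class TolerantMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

      private static final Log LOG = LogFactory.getLog(TolerantMapper.class);

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, LongWritable> output, Reporter reporter)
          throws IOException {
        try {
          // User logic that may throw on malformed input, e.g. numeric parsing.
          long n = Long.parseLong(value.toString().trim());
          output.collect(value, new LongWritable(n));
        } catch (RuntimeException e) {
          LOG.warn("Skipping bad record at offset " + key, e);
          reporter.incrCounter("RecordErrors", "MAP", 1);  // visible in the web UI
          // IOExceptions from output.collect()/RecordWriter.write() are not
          // caught, so output errors still result in task failure.
        }
      }
    }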
This behaviour should be optional but enabled by default. A count of errors per task and per job should be maintained and displayed in the web UI. Perhaps the task should fail if the fraction of records that result in exceptions exceeds some threshold (>50%?); this would stop misconfigured or buggy jobs early.
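One way the proposed cutoff could be tracked, as a standalone sketch (RecordErrorBudget and its 0.5 example are illustrative assumptions, not an existing Hadoop API): count processed and failed records per task and abort once the failed fraction crosses the threshold, so a misconfigured or buggy job fails fast instead of silently skipping everything.

    // Hypothetical per-task error budget implementing the ">50% of records fail" idea.
    public class RecordErrorBudget {
      private final double maxFailedFraction;  // e.g. 0.5 for a 50% cutoff
      private long processed = 0;
      private long failed = 0;

      public RecordErrorBudget(double maxFailedFraction) {
        this.maxFailedFraction = maxFailedFraction;
      }

      /** Call once for every input record seen. */
      public void recordProcessed() { processed++; }

      /** Call when a record throws; aborts the task once the budget is spent. */
      public void recordFailed() {
        failed++;
        if (processed > 0 && (double) failed / processed > maxFailedFraction) {
          throw new RuntimeException("Too many bad records: " + failed + " of "
              + processed + " exceeds allowed fraction " + maxFailedFraction);
        }
      }
    }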
Thoughts?
Attachments
Issue Links
- is depended upon by
  - HADOOP-3828 Write skipped records' bytes to DFS (Closed)
  - HADOOP-3829 Narrown down skipped records based on user acceptable value (Closed)
- relates to
  - HADOOP-3954 Skip records enabled as default. (Closed)