Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version: 0.15.3
- Hadoop Flags: Reviewed
Description
Every once in a while we encounter corrupted text files (corrupted at the source, prior to being copied into Hadoop). Inevitably, some of the corrupted data looks like one extremely long line; Hadoop trips over it while trying to buffer it into an in-memory object and fails with an OutOfMemoryError. The code looks the same in trunk as well.

So we are looking for an option on TextInputFormat (and the like) to ignore long lines. Ideally, we would simply skip errant lines above a configurable size limit.
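The skipping behavior requested above could look roughly like the following. This is a minimal, self-contained sketch, not Hadoop's actual LineRecordReader; the class and method names (BoundedLineReader, readLines) are hypothetical. The key point is that an oversized line is consumed and discarded byte by byte rather than buffered, so memory use stays bounded by the configured limit no matter how long a corrupt line is.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a line reader that drops lines longer than a
// configured maximum instead of buffering them fully in memory.
public class BoundedLineReader {

    // Returns only the lines whose length is <= maxLength; oversized
    // lines are consumed and dropped, so the buffer never grows past
    // maxLength regardless of how long a corrupt line is.
    static List<String> readLines(InputStream in, int maxLength) throws IOException {
        List<String> lines = new ArrayList<>();
        StringBuilder buf = new StringBuilder();
        boolean skipping = false; // true while discarding an oversized line
        int b;
        while ((b = in.read()) != -1) {
            if (b == '\n') {
                if (!skipping) {
                    lines.add(buf.toString());
                }
                buf.setLength(0);
                skipping = false;
            } else if (!skipping) {
                if (buf.length() >= maxLength) {
                    // Line exceeded the limit: stop buffering, drop the rest.
                    buf.setLength(0);
                    skipping = true;
                } else {
                    buf.append((char) b);
                }
            }
        }
        if (!skipping && buf.length() > 0) {
            lines.add(buf.toString()); // final line without a trailing newline
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // One sane line, one 1000-character corrupt line, one sane line.
        String data = "ok\n" + "x".repeat(1000) + "\nalso ok\n";
        InputStream in = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
        System.out.println(readLines(in, 100)); // prints [ok, also ok]
    }
}
```

In later Hadoop releases a limit along these lines is exposed through the LineRecordReader's max-line-length configuration property, so callers can set a cap in the job configuration rather than subclassing the input format.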