Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.2.0
- Fix Version/s: None
- Component/s: None
- Environment: Linux
Description
When reading an input text file, the JobTracker seems to assign the first two FileSplits to a single Mapper Task, then assigns an EMPTY FileSplit (end of file) to another Mapper Task, which finishes instantaneously. This skews job balance, since one map task is now twice as large as the others.
In "src/mapred/org/apache/hadoop/mapred/LineRecordReader.java", line 110, there is a comment about skipping the first line of a split by default, since "next()" reads one line beyond the split boundary anyway. This was not the behavior in 0.20.2, which did not have this problem.
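For context, the intended per-split semantics can be sketched as follows. This is a hypothetical simplified model (the class, method, and data here are illustrative, not the actual Hadoop LineRecordReader): a reader whose split does not start at byte 0 skips its first, possibly partial, line because the previous split's reader already consumed it, and every reader keeps consuming lines as long as a line *starts* at or before the split's end offset, so it may read one line past its boundary. Under these rules a zero-length split at end of file emits nothing, which matches the instantly-finishing empty-split task described above.

```java
import java.util.ArrayList;
import java.util.List;

public class SplitReaderSketch {
    /**
     * Hypothetical model of line-per-split semantics: returns the lines
     * a reader for the byte range [start, end) would emit from data.
     */
    static List<String> readSplit(String data, int start, int end) {
        List<String> lines = new ArrayList<>();
        int pos = start;
        // Not at the start of the file: skip the first line, since the
        // previous split's reader read past its boundary and consumed it.
        if (start != 0) {
            int nl = data.indexOf('\n', pos);
            pos = (nl == -1) ? data.length() : nl + 1;
        }
        // Read every line that STARTS at or before the split's end offset,
        // i.e. potentially one line beyond the boundary.
        while (pos < data.length() && pos <= end) {
            int nl = data.indexOf('\n', pos);
            int lineEnd = (nl == -1) ? data.length() : nl;
            lines.add(data.substring(pos, lineEnd));
            pos = (nl == -1) ? data.length() : nl + 1;
        }
        return lines;
    }

    public static void main(String[] args) {
        String data = "aaa\nbbb\nccc\n";        // 12 bytes, 3 lines
        // Two real splits plus a zero-length split at end of file,
        // mirroring the situation described in this report.
        System.out.println(readSplit(data, 0, 6));   // [aaa, bbb]
        System.out.println(readSplit(data, 6, 12));  // [ccc]
        System.out.println(readSplit(data, 12, 12)); // [] - empty split
    }
}
```

Each line is owned by exactly one split under this scheme, which is what makes it compatible with splittable compression codecs; the question in this report is whether the 1.x implementation applies these rules correctly to plain text input.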
This seems related to HADOOP-4010:
"HADOOP-4010. Change semantics for LineRecordReader to read an additional
line per split- rather than moving back one character in the stream- to
work with splittable compression codecs. (Abdul Qadeer via cdouglas)"
It seems this change was not implemented properly and leads to the issue described above when the input file is plain text.