Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Not A Problem
- Affects Version/s: 2.0.1
- Fix Version/s: None
Description
When reading LZO files with sc.textFile, a few files are missed from time to time.
Sample:
val Data = sc.textFile(Files)
listFiles += Data.count()
Here, Files is an HDFS directory containing LZO files. If this is executed, for example, 1000 times, it returns different results a few of those times.
If you instead use newAPIHadoopFile to force com.hadoop.mapreduce.LzoTextInputFormat, it works perfectly and returns the same result in every execution.
Sample:
val Data = sc.newAPIHadoopFile(Files,
  classOf[com.hadoop.mapreduce.LzoTextInputFormat],
  classOf[org.apache.hadoop.io.LongWritable],
  classOf[org.apache.hadoop.io.Text]).map(_._2.toString)
listFiles += Data.count()
Looking at the Spark code, it appears that sc.textFile uses TextInputFormat by default and does not switch to com.hadoop.mapreduce.LzoTextInputFormat even when hadoop-lzo is installed.
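A plausible explanation (my assumption, not confirmed against the Spark or hadoop-lzo sources): TextInputFormat treats the .lzo files as splittable plain text, so any split after the first starts at an arbitrary byte offset inside a compressed stream, where no valid records can be recovered. The sketch below illustrates that general problem using gzip from the JDK standard library, since the LZO codec itself is not in the stdlib; SplitDemo, compress, and readableFrom are illustrative names, not Spark APIs:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, IOException}
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

object SplitDemo {
  // Compress some text lines in memory.
  def compress(s: String): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val gz = new GZIPOutputStream(bos)
    gz.write(s.getBytes("UTF-8"))
    gz.close()
    bos.toByteArray
  }

  // Try to start decompressing at a given byte offset, the way a
  // splittable input format starts a non-first split mid-file.
  def readableFrom(bytes: Array[Byte], offset: Int): Boolean =
    try {
      val in = new GZIPInputStream(new ByteArrayInputStream(bytes.drop(offset)))
      in.read()
      true
    } catch {
      case _: IOException => false // no stream header at this offset
    }

  def main(args: Array[String]): Unit = {
    val bytes = compress("line1\nline2\nline3\n")
    // A split starting at the beginning of the stream decodes fine;
    // one starting mid-stream cannot be decoded at all.
    println(readableFrom(bytes, 0))
    println(readableFrom(bytes, bytes.length / 2))
  }
}
```

An LZO-aware input format such as com.hadoop.mapreduce.LzoTextInputFormat knows where compressed blocks begin (via the .index files hadoop-lzo produces), which would explain why the newAPIHadoopFile variant is deterministic.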