Spark / SPARK-18414

sc.textFile doesn't seem to use LzoTextInputFormat when hadoop-lzo is installed


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Not A Problem
    • Affects Version/s: 2.0.1
    • Fix Version/s: None
    • Component/s: Input/Output

    Description

      When reading LZO files using sc.textFile, it misses a few files from time to time.

      Sample:
      val Data = sc.textFile(Files)
      listFiles += Data.count()

      Here Files is an HDFS directory containing LZO files. If executed, for example, 1000 times, it returns different counts a few of those times.

      Now if you use newAPIHadoopFile to force it to use com.hadoop.mapreduce.LzoTextInputFormat, it works perfectly and shows the same result in every execution.

      Sample:

      val Data = sc.newAPIHadoopFile(Files,
        classOf[com.hadoop.mapreduce.LzoTextInputFormat],
        classOf[org.apache.hadoop.io.LongWritable],
        classOf[org.apache.hadoop.io.Text]).map(_._2.toString)
      listFiles += Data.count()
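
      The flakiness described above can be made visible with a small driver-side loop. This is only a sketch, reusing the reporter's Files path and an active SparkContext sc; the repetition count and the printed message are illustrative, not part of the original report:

      ```scala
      // Sketch: count the same LZO-backed path repeatedly and record the
      // distinct results. A deterministic input format should yield exactly
      // one distinct count across all runs.
      val counts = (1 to 1000).map { _ =>
        sc.textFile(Files).count()
      }.distinct

      if (counts.size > 1) {
        println(s"Non-deterministic counts observed: ${counts.mkString(", ")}")
      }
      ```

      Running the same loop with the newAPIHadoopFile variant should always produce a single distinct count.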

      Looking at the Spark code, it appears that sc.textFile uses TextInputFormat by default and does not switch to com.hadoop.mapreduce.LzoTextInputFormat when hadoop-lzo is installed.

      https://github.com/apache/spark/blob/v2.0.1/core/src/main/scala/org/apache/spark/SparkContext.scala#L795-L801
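
      For reference, the linked lines define textFile with TextInputFormat hardcoded. The snippet below is reproduced from memory and may differ slightly from the tagged source; the link above is authoritative:

      ```scala
      def textFile(
          path: String,
          minPartitions: Int = defaultMinPartitions): RDD[String] = withScope {
        assertNotStopped()
        hadoopFile(path, classOf[TextInputFormat], classOf[LongWritable], classOf[Text],
          minPartitions).map(pair => pair._2.toString).setName(path)
      }
      ```

      Since the input format is fixed at compile time, installing hadoop-lzo cannot change which InputFormat textFile uses; callers who need LzoTextInputFormat must go through hadoopFile or newAPIHadoopFile explicitly.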

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Renan Vicente Gomes da Silva (renanvice@gmail.com)
            Votes: 0
            Watchers: 1

            Dates

              Created:
              Updated:
              Resolved: