SPARK-12947: Spark with Swift throws EOFException when reading parquet file


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 1.6.0
    • Fix Version/s: None
    • Component/s: SQL
    • Environment: Spark 1.6.0-SNAPSHOT

    Description

      I'm using Swift (OpenStack object storage) as the underlying storage for my Spark jobs, but it sometimes throws EOFExceptions for some parts of the data.

      Another user has hit the same issue: http://stackoverflow.com/questions/32400137/spark-swift-integration-parquet
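
      For context, a minimal sketch (not taken from this report) of how such a job is typically wired to Swift through the hadoop-openstack connector; the service name "myswift", the credentials, and the container below are all placeholders:
      ```
      // Hypothetical Swift wiring via hadoop-openstack; every name and
      // credential here is a placeholder, not from the failing job.
      val hc = sc.hadoopConfiguration
      hc.set("fs.swift.impl", "org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem")
      hc.set("fs.swift.service.myswift.auth.url", "https://keystone.example.com:5000/v2.0/tokens")
      hc.set("fs.swift.service.myswift.tenant", "mytenant")
      hc.set("fs.swift.service.myswift.username", "myuser")
      hc.set("fs.swift.service.myswift.password", "secret")
      hc.set("fs.swift.service.myswift.public", "false")

      // Paths then use the swift:// scheme: swift://<container>.<service>/<path>
      val featurePath = "swift://features.myswift/parquet"
      ```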

      Code to reproduce:
      ```
      import org.apache.spark.ml.clustering.KMeans
      import org.apache.spark.sql.functions.explode

      // Read the feature DataFrame back from Parquet
      val features = sqlContext.read.parquet(featurePath)
      // Flatten the Array[Vector] column into one Vector per row
      val exploded = features.select(explode(features("features"))).toDF("features")
      val kmeans = new KMeans()
        .setK(k)
        .setFeaturesCol("features")
        .setPredictionCol("prediction")
      val model = kmeans.fit(exploded)
      ```
      features is a DataFrame with two columns:
      image: String, features: Array[Vector]

      exploded is a DataFrame with a single column:
      features: Vector
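
      For reference, the schemas above can be checked with printSchema(); the commented output below is what the description implies, not output captured from the failing job:
      ```
      features.printSchema()
      // root
      //  |-- image: string (nullable = true)
      //  |-- features: array (nullable = true)
      //  |    |-- element: vector (containsNull = true)

      exploded.printSchema()
      // root
      //  |-- features: vector (nullable = true)
      ```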

      The following exception is thrown when running takeSample on a large dataset (~1 GB or more) saved as a Parquet file:
      ```
      java.io.EOFException
      at java.io.DataInputStream.readFully(DataInputStream.java:197)
      at java.io.DataInputStream.readFully(DataInputStream.java:169)
      at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:756)
      at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:494)
      at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
      at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:208)
      at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
      at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.hasNext(SqlNewHadoopRDD.scala:168)
      at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:350)
      at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      at org.apache.spark.rdd.RDD$$anonfun$zip$1$$anonfun$apply$30$$anon$1.hasNext(RDD.scala:827)
      at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1563)
      at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1119)
      at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1119)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1840)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1840)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:88)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
      ```
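
      A note on the stack trace: although the job calls takeSample, the count frames appear because RDD.takeSample computes count() internally before sampling, which forces a full scan of every Parquet part file. A minimal trigger, assuming the exploded DataFrame from the snippet above:
      ```
      // takeSample() runs count() first, so the full scan of the Parquet
      // data on Swift (where the EOFException surfaces) happens before
      // any rows are actually sampled.
      val sample = exploded.rdd.takeSample(withReplacement = false, num = 1000)
      ```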


          People

            Assignee: Unassigned
            Reporter: Sam Stoelinga (samos123)
            Votes: 3
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved: