SPARK-23822: Improve error message for Parquet schema mismatches


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.3.1, 2.4.0
    • Component/s: SQL
    • Labels: None

    Description

      If a user attempts to read Parquet files with mismatched schemas while schema merging is disabled, the read may fail with very confusing UnsupportedOperationException and ParquetDecodingException errors raised from deep inside the Parquet reader.

      e.g.

      Seq(("bcd")).toDF("a").coalesce(1).write.mode("overwrite").parquet(s"$path/")
      Seq((1)).toDF("a").coalesce(1).write.mode("append").parquet(s"$path/")
      
      spark.read.parquet(s"$path/").collect()
      

      This would result in:

      Caused by: java.lang.UnsupportedOperationException: Unimplemented type: IntegerType
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBinaryBatch(VectorizedColumnReader.java:474)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:214)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:261)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:159)
        at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:106)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:106)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:617)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
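
      For comparison, the description notes that schema merging is disabled in this scenario. Below is a minimal sketch (not part of the original report), assuming the same `spark` session and `path` as above: with the standard `mergeSchema` read option enabled, the conflict is detected while merging the footer schemas of the two files, so the failure points at the incompatible types of column "a" instead of surfacing from the vectorized decoder. The exact exception and message wording vary by Spark version.

      // Sketch only: enabling schema merging reports the mismatch at schema-resolution time.
      spark.read
        .option("mergeSchema", "true")
        .parquet(s"$path/")
        .collect()
      // Typically fails with a schema-merge error naming the conflicting column types
      // (string vs int) rather than "Unimplemented type: IntegerType".

      This does not make the mixed-type directory readable; it only changes where and how the mismatch is reported.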
      

       

    People

    • Assignee: Yuchen Huo
    • Reporter: Yuchen Huo
    • Votes: 0
    • Watchers: 4

            Dates

              Created:
              Updated:
              Resolved: