Spark / SPARK-48792

INSERT with partial column list to table with char/varchar crashes


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.5.1
    • Fix Version/s: 4.0.0
    • Component/s: SQL

    Description

      Inserting with a partial column list into a table that has char/varchar columns crashes the executor with an internal error while writing Parquet:

      ```
      24/07/03 16:29:01 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
      org.apache.spark.SparkException: [INTERNAL_ERROR] Unsupported data type VarcharType(64). SQLSTATE: XX000
      at org.apache.spark.SparkException$.internalError(SparkException.scala:92)
      at org.apache.spark.SparkException$.internalError(SparkException.scala:96)
      at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.makeWriter(ParquetWriteSupport.scala:266)
      at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.$anonfun$init$2(ParquetWriteSupport.scala:111)
      at scala.collection.immutable.List.map(List.scala:247)
      at scala.collection.immutable.List.map(List.scala:79)
      at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.init(ParquetWriteSupport.scala:111)
      at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:478)
      at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:422)
      at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:411)
      at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:36)
      at org.apache.spark.sql.execution.datasources.parquet.ParquetUtils$$anon$1.newInstance(ParquetUtils.scala:500)
      at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:180)
      at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:165)
      at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:391)
      at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:107)
      at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:896)
      at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:896)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:369)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:333)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
      at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)
      at org.apache.spark.scheduler.Task.run(Task.scala:146)
      at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:640)
      at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
      at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
      at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:99)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:643)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
      at java.base/java.lang.Thread.run(Thread.java:840)
      ```
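
      The report itself contains only the stack trace, not the reproduction steps. A minimal sketch of the scenario implied by the title (the table and column names here are hypothetical, not taken from the report):

      ```sql
      -- Hypothetical Parquet table with a varchar column.
      CREATE TABLE t (c1 INT, c2 VARCHAR(64)) USING parquet;

      -- Inserting with a partial column list omits c2, so Spark fills it
      -- with a default value. Judging from the stack trace, the raw
      -- VarcharType(64) then reaches ParquetWriteSupport.makeWriter,
      -- which does not handle it, producing the INTERNAL_ERROR above.
      INSERT INTO t (c1) VALUES (1);
      ```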

People

    Assignee: Kent Yao
    Reporter: Kent Yao
