HIVE-26516: Hive with Parquet table insert exception "Unsupported primitive data type: VOID"


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.3.7
    • Fix Version/s: None
    • Component/s: Hive, Parquet, SQL
    • Labels: None

    Description

      Steps to reproduce:

      drop table ty;
      create table ty(id int);
      insert into table ty values(null);

      drop table ty1;
      create table ty1(id int) stored as parquet;
      insert into table ty1
      select case when 1=2 then ty.id else null end as id
      from ty;

      Exception:

      Driver stacktrace:
          at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2023)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1972)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1971)
          at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
          at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
          at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
          at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1971)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:950)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:950)
          at scala.Option.foreach(Option.scala:407)
          at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:950)
          at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2203)
          at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2152)
          at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2141)
          at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
      Caused by: java.lang.RuntimeException: Error processing row: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":null}
          at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:149)
          at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
          at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
          at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
          at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:43)
          at scala.collection.Iterator.foreach(Iterator.scala:941)
          at scala.collection.Iterator.foreach$(Iterator.scala:941)
          at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
          at org.apache.spark.rdd.AsyncRDDActions.$anonfun$foreachAsync$2(AsyncRDDActions.scala:127)
          at org.apache.spark.rdd.AsyncRDDActions.$anonfun$foreachAsync$2$adapted(AsyncRDDActions.scala:127)
          at org.apache.spark.SparkContext.$anonfun$submitJob$1(SparkContext.scala:2242)
          at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
          at org.apache.spark.scheduler.Task.run(Task.scala:127)
          at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
          at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
          at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":null}
          at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:562)
          at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:136)
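      The likely mechanism (a reading of the trace, not a verified analysis): because 1=2 is constant false, Hive's constant folding can reduce the whole CASE expression to a bare NULL literal, and an untyped NULL literal gets the void type. The Parquet writer then fails, since void is not a writable primitive. EXPLAIN should make the inferred type visible; the plan text suggested in the comment below is expected output on 2.3.x, not captured output:

      -- Hypothetical diagnostic: the Select Operator in the plan is expected
      -- to show the folded constant as: null (type: void)
      explain
      select case when 1=2 then ty.id else null end as id
      from ty;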

       

       

      -- Second reproduction scenario (a non-null column alongside the NULL):
      drop table ty;
      create table ty(id int, name int);
      insert into table ty values(1, null);

      drop table ty1;
      create table ty1(id int) stored as parquet;
      insert into table ty1
      select case when 1=2 then ty.id else null end as id
      from ty;

      Driver stacktrace:
          at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2023)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1972)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1971)
          at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
          at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
          at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
          at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1971)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:950)
          at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:950)
          at scala.Option.foreach(Option.scala:407)
          at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:950)
          at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2203)
          at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2152)
          at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2141)
          at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
      Caused by: java.lang.RuntimeException: Error processing row: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"name":null}
          at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:149)
          at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
          at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
          at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
          at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:43)
          at scala.collection.Iterator.foreach(Iterator.scala:941)
          at scala.collection.Iterator.foreach$(Iterator.scala:941)
          at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
          at org.apache.spark.rdd.AsyncRDDActions.$anonfun$foreachAsync$2(AsyncRDDActions.scala:127)
          at org.apache.spark.rdd.AsyncRDDActions.$anonfun$foreachAsync$2$adapted(AsyncRDDActions.scala:127)
          at org.apache.spark.SparkContext.$anonfun$submitJob$1(SparkContext.scala:2242)
          at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
          at org.apache.spark.scheduler.Task.run(Task.scala:127)
          at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
          at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
          at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"name":null}
          at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:562)
          at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:136)
          ... 18 more
      Caused by: java.lang.RuntimeException: Parquet record is malformed: Unsupported primitive data type: VOID
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:87)
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
          at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
          at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
          at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
          at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:136)
          at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:149)
          at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:762)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
          at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
          at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
          at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
          at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:148)
          at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:547)
          ... 19 more
      Caused by: java.lang.IllegalArgumentException: Unsupported primitive data type: VOID
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.createWriter(DataWritableWriter.java:140)
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.access$000(DataWritableWriter.java:61)
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.<init>(DataWritableWriter.java:189)
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.<init>(DataWritableWriter.java:213)
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.createMessageWriter(DataWritableWriter.java:96)
          at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:83)
          ... 33 more
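
      A workaround consistent with this diagnosis (a sketch, not verified against 2.3.7) is to give the NULL branch an explicit type, so the folded constant is int rather than void:

      drop table ty1;
      create table ty1(id int) stored as parquet;
      -- cast(null as int) keeps the constant typed as int even after folding,
      -- so the Parquet writer never sees a void column
      insert into table ty1
      select case when 1=2 then ty.id else cast(null as int) end as id
      from ty;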

People

    • Assignee: Unassigned
    • Reporter: liukailong123 (lkl)