Spark / SPARK-37728

Reading nested columns with ORC vectorized reader can cause ArrayIndexOutOfBoundsException


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.2.0
    • Fix Version/s: 3.2.1, 3.3.0
    • Component/s: SQL
    • Labels: None

    Description

      When spark.sql.orc.enableNestedColumnVectorizedReader is set to true, reading nested columns from ORC files can throw an ArrayIndexOutOfBoundsException. Here is a simple reproduction:

      1) Create an ORC file that contains records of type Array<Array<String>>:

      ./bin/spark-shell

      // 100 records, each an Array of 50 Arrays of 1000 strings ("0" .. "999")
      case class Item(record: Array[Array[String]])

      val data = new Array[Array[Array[String]]](100)
      for (i <- 0 to 99) {
        val temp = new Array[Array[String]](50)
        for (j <- 0 to 49) {
          temp(j) = new Array[String](1000)
          for (k <- 0 to 999) {
            temp(j)(k) = k.toString
          }
        }
        data(i) = temp
      }

      val rdd = spark.sparkContext.parallelize(data, 1)
      val df = rdd.map(x => Item(x)).toDF
      df.write.orc("file:///home/user_name/data")
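
      For reference, a more compact way to build the same dataset; this is only an equivalent sketch, assuming the same spark-shell session (Item is defined above, and spark-shell auto-imports the implicits that toDF needs):

      // Same shape as above: 100 records x 50 arrays x 1000 strings ("0" .. "999")
      val data = Seq.tabulate(100) { _ =>
        Array.tabulate(50)(_ => Array.tabulate(1000)(_.toString))
      }
      val df = spark.sparkContext.parallelize(data, 1).map(Item(_)).toDF
      df.write.orc("file:///home/user_name/data")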

       

      2) Read the ORC file back with spark.sql.orc.enableNestedColumnVectorizedReader=true:

      ./bin/spark-shell --conf spark.sql.orc.enableVectorizedReader=true --conf spark.sql.codegen.wholeStage=true --conf spark.sql.orc.enableNestedColumnVectorizedReader=true --conf spark.sql.orc.columnarReaderBatchSize=4096 
      val df = spark.read.orc("file:///home/user_name/data")
      df.show(100) 
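
      For reference, the same settings can also be applied from inside an already-running spark-shell before the read is triggered; a minimal sketch, assuming all four flags behave as dynamic SQL confs (they are plain spark.sql.* settings, not static ones):

      spark.conf.set("spark.sql.orc.enableVectorizedReader", "true")
      spark.conf.set("spark.sql.codegen.wholeStage", "true")
      spark.conf.set("spark.sql.orc.enableNestedColumnVectorizedReader", "true")
      spark.conf.set("spark.sql.orc.columnarReaderBatchSize", "4096")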

       

      Spark then threw an ArrayIndexOutOfBoundsException:

      Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2455)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2404)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2403)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2403)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1162)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1162)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1162)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2643)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2585)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2574)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:940)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2227)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2248)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2267)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:490)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:443)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48)
        at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3833)
        at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2832)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3824)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3822)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:2832)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:3053)
        at org.apache.spark.sql.Dataset.getRows(Dataset.scala:288)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:327)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:807)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:766)
        ... 47 elided
      Caused by: java.lang.ArrayIndexOutOfBoundsException: 4096
        at org.apache.spark.sql.execution.datasources.orc.OrcArrayColumnVector.getArray(OrcArrayColumnVector.java:53)
        at org.apache.spark.sql.vectorized.ColumnarArray.getArray(ColumnarArray.java:170)
        at org.apache.spark.sql.vectorized.ColumnarArray.getArray(ColumnarArray.java:31)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:363)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:136)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:507)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:510)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
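
      Note that the failing index (4096) equals the configured spark.sql.orc.columnarReaderBatchSize, which suggests the nested-column vectorized path reads one element past the end of a batch-sized array. Until a release carrying the fix (3.2.1 / 3.3.0 per Fix Version/s above) is in use, a workaround sketch is to read with the nested-column vectorized reader disabled; only that flag needs to be off, so vectorized reading of non-nested columns should be unaffected:

      spark.conf.set("spark.sql.orc.enableNestedColumnVectorizedReader", "false")
      val df = spark.read.orc("file:///home/user_name/data")
      df.show(100)  // with nested vectorization off, this schema is read via the non-vectorized path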

       

          People

            Assignee: Yimin Yang
            Reporter: Yimin Yang