Spark / SPARK-26224

Adding 3000 new columns with the DataFrame withColumn function results in a StackOverflowError.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 3.0.0
    • Component/s: SQL
    • Labels: None
    • Environment: On a MacBook, using the IntelliJ editor; ran the sample code below as a unit test.

    Description

      Reproduction steps:

      Run this sample code on your laptop. I am trying to add 3000 new columns to a base DataFrame with one column.

      import org.apache.spark.sql.DataFrame
      import org.apache.spark.sql.functions.lit
      import org.apache.spark.sql.types.{DataTypes, StructField}
      import spark.implicits._

      val newColumnsToBeAdded: Seq[StructField] =
        for (i <- 1 to 3000) yield StructField("field_" + i, DataTypes.LongType)

      val baseDataFrame: DataFrame = Seq(1).toDF("employee_id")

      // Each withColumn call wraps the previous plan in another Project node.
      val result = newColumnsToBeAdded.foldLeft(baseDataFrame)((df, newColumn) =>
        df.withColumn(newColumn.name, lit(0)))

      result.show(false)

      This ends up with the following stack trace:

      java.lang.StackOverflowError
      at scala.collection.generic.GenTraversableFactory$GenericCanBuildFrom.apply(GenTraversableFactory.scala:57)
      at scala.collection.generic.GenTraversableFactory$GenericCanBuildFrom.apply(GenTraversableFactory.scala:52)
      at scala.collection.TraversableLike$class.builder$1(TraversableLike.scala:229)
      at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
      at scala.collection.immutable.List.map(List.scala:296)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:333)
      at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
      at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
      at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:272)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
      at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
      at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
      at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:272)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:272)
      at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
      at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
      at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
      at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:272)
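
      For context, a sketch of a common workaround, assuming Spark 2.x and reusing the names from the reproduction code above: each withColumn call stacks another Project node onto the plan, and Catalyst walks that plan recursively (the repeated transformDown frames above), so adding all of the columns in a single select keeps the plan shallow and avoids the deep recursion. This is only an illustration, not the change that resolved this ticket.

      import org.apache.spark.sql.functions.{col, lit}

      // Build every new column once, then add them in one select instead of
      // 3000 chained withColumn calls.
      val newCols = newColumnsToBeAdded.map(f => lit(0).as(f.name))
      val resultFlat = baseDataFrame.select(col("*") +: newCols: _*)
      resultFlat.show(false)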



People

    Assignee: Marco Gaido (mgaido)
    Reporter: Dorjee Tsering (dotsering)
    Votes: 0
    Watchers: 8

Dates

    Created:
    Updated:
    Resolved: