[SPARK-15062] Show on DataFrame causes OutOfMemoryError, NegativeArraySizeException or segfault


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0
    • Component/s: SQL
    • Labels: None
    • Environment: spark-2.0.0-SNAPSHOT using commit hash 90787de864b58a1079c23e6581381ca8ffe7685f and Java 1.7.0_67

    Description

      scala> val dfComplicated = sc.parallelize(List((Map("1" -> "a"), List("b", "c")), (Map("2" -> "b"), List("d", "e")))).toDF
      ...
      dfComplicated: org.apache.spark.sql.DataFrame = [_1: map<string,string>, _2: array<string>]
      
      scala> dfComplicated.show
      java.lang.OutOfMemoryError: Java heap space
        at org.apache.spark.unsafe.types.UTF8String.getBytes(UTF8String.java:229)
        at org.apache.spark.unsafe.types.UTF8String.toString(UTF8String.java:821)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
        at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.fromRow(ExpressionEncoder.scala:241)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1$$anonfun$apply$13.apply(Dataset.scala:2121)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1$$anonfun$apply$13.apply(Dataset.scala:2121)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2121)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:54)
        at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2408)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2120)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2127)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1861)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1860)
        at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2438)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:1860)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:2077)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:238)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:529)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:489)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:498)
        ... 6 elided
      
      scala>
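
      For completeness, here is the same reproduction as a self-contained application rather than a REPL session. This is a minimal sketch, assuming the Spark 2.0 SparkSession entry point and local mode; the object name Spark15062Repro is illustrative:

      import org.apache.spark.sql.SparkSession

      object Spark15062Repro {
        def main(args: Array[String]): Unit = {
          // Local-mode session; the report used a spark-2.0.0-SNAPSHOT build.
          val spark = SparkSession.builder()
            .appName("SPARK-15062-repro")
            .master("local[*]")
            .getOrCreate()
          import spark.implicits._

          // The schema mixes a map column and an array column, as in the
          // REPL session above: [_1: map<string,string>, _2: array<string>]
          val dfComplicated = spark.sparkContext
            .parallelize(List(
              (Map("1" -> "a"), List("b", "c")),
              (Map("2" -> "b"), List("d", "e"))))
            .toDF()

          // show() collects a handful of rows back through the generated
          // safe projection, which is where the failure surfaces.
          dfComplicated.show()

          spark.stop()
        }
      }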
      

      With the driver heap increased to 8 GB, the call instead fails with a NegativeArraySizeException or a segfault.
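
      To reproduce that variant, one can start the shell with a larger driver heap before rerunning the snippet above (a hedged example; --driver-memory is the standard spark-shell flag, and the 8g figure mirrors the report):

      $ ./bin/spark-shell --driver-memory 8g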

      See the original discussion on the Spark dev list:
      http://apache-spark-developers-list.1001551.n3.nabble.com/spark-2-segfault-td17381.html


People

    Assignee: Bo Meng (mengbo)
    Reporter: koert kuipers (koert)
    Votes: 0
    Watchers: 9
