SPARK-7103: SparkContext.union crashed when some RDDs have no partitioner


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.3.0, 1.3.1
    • Fix Version/s: 1.3.2, 1.4.0
    • Component/s: Spark Core
    • Labels: None

      Description

      I encountered a bug where Spark crashes with the following stack trace:

      java.util.NoSuchElementException: None.get
      	at scala.None$.get(Option.scala:313)
      	at scala.None$.get(Option.scala:311)
      	at org.apache.spark.rdd.PartitionerAwareUnionRDD.getPartitions(PartitionerAwareUnionRDD.scala:69)
      

      Here's a minimal example that reproduces it in the Spark shell:

      import org.apache.spark.HashPartitioner

      val x = sc.parallelize(Seq(1 -> true, 2 -> true, 3 -> false)).partitionBy(new HashPartitioner(1))
      val y = sc.parallelize(Seq(1 -> true))
      sc.union(y, x).count() // crashes: the first RDD has no partitioner
      sc.union(x, y).count() // works, since the first RDD has a partitioner
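
      The order-dependence above, together with the None.get in getPartitions, suggests that the partitioner-aware union inherits its partitioner Option from the first input RDD and unconditionally calls .get on it (our reading of the stack trace, not confirmed against the Spark source). A minimal standalone illustration of that failure mode:

      import org.apache.spark.Partitioner

      // Stand-in for the partitioner the union inherits from its first
      // input: None whenever that RDD was never explicitly partitioned.
      val inherited: Option[Partitioner] = None
      inherited.get.numPartitions // java.util.NoSuchElementException: None.get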
      

      As a workaround, we had to resort to instantiating UnionRDD directly to avoid the PartitionerAwareUnionRDD; a sketch follows.
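
      A sketch of that workaround, generalized into a hypothetical helper (safeUnion is our name, not a Spark API; UnionRDD is a developer-API class, so this leans on Spark internals): fall back to constructing UnionRDD directly whenever any input lacks a partitioner, so the partitioner-aware path is never selected for mixed inputs.

      import scala.reflect.ClassTag
      import org.apache.spark.SparkContext
      import org.apache.spark.rdd.{RDD, UnionRDD}

      // Let SparkContext.union pick its strategy only when every input has
      // a defined partitioner; otherwise construct the plain UnionRDD.
      def safeUnion[T: ClassTag](sc: SparkContext, rdds: Seq[RDD[T]]): RDD[T] =
        if (rdds.forall(_.partitioner.isDefined)) sc.union(rdds)
        else new UnionRDD(sc, rdds)

      safeUnion(sc, Seq(y, x)).count() // no longer crashes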

    People

    • Assignee: Steven She (stevenshe)
    • Reporter: Steven She (stevenshe)
    • Votes: 0
    • Watchers: 5
