• Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 0.8.0, 0.9.0, 1.0.0, 1.1.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels:


      The sortByKey() method is documented as a transformation, not an action, yet it launches a cluster job regardless.

      Some discussion on the mailing list suggested that this is a problem with the rdd.count() call inside Partitioner.scala's rangeBounds method.

      Josh Rosen suggests that rangeBounds should be made into a lazy variable:

      I wonder whether making RangePartitioner.rangeBounds into a lazy val would fix this. We'd need to make sure that rangeBounds() is never called before an action is performed, which could be tricky because it's called in the RangePartitioner.equals() method. Maybe it's sufficient to just compare the number of partitions, the ids of the RDDs used to create the RangePartitioner, and the sort ordering. This still supports the case where I range-partition one RDD and pass the same partitioner to a different RDD. It breaks support for the case where two range partitioners created on different RDDs happen to have the same rangeBounds(), but it seems unlikely that this would really harm performance, since it's improbable that two range partitioners are equal by chance.
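
      The fix described above can be sketched as follows. This is a minimal, self-contained illustration, not Spark's actual Partitioner.scala: the sampling stand-in, class shape, and names other than rangeBounds are simplified for this example.

      ```scala
      object LazyRangeBoundsSketch {
        // Stand-in for sampling the RDD. In Spark this is where the
        // rdd.count()/sampling pass would launch a cluster job.
        def computeBounds(data: Seq[Int], partitions: Int): Array[Int] = {
          val sorted = data.sorted
          (1 until partitions).map(i => sorted((i * sorted.length) / partitions)).toArray
        }

        class RangePartitioner(val numPartitions: Int, data: Seq[Int]) {
          // `lazy val`: bounds are computed on first access, not at
          // construction, so building the partitioner launches no job.
          lazy val rangeBounds: Array[Int] = computeBounds(data, numPartitions)

          def getPartition(key: Int): Int = rangeBounds.count(_ <= key)

          // Per the suggestion above: compare cheap identity fields instead
          // of rangeBounds, so equals() never forces the lazy val.
          override def equals(other: Any): Boolean = other match {
            case o: RangePartitioner => o.numPartitions == numPartitions
            case _                   => false
          }
          override def hashCode(): Int = numPartitions
        }
      }
      ```

      With this shape, constructing the partitioner and calling equals() stay cheap; only the first call that actually touches rangeBounds (e.g. getPartition) pays for the sampling pass.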

      Can we please make this happen? I'll send a PR on GitHub to start the discussion and testing.


              • Assignee:
                eje Erik Erlandson
                ash211 Andrew Ash
              • Votes: 2
              • Watchers: 25


                • Created: