SPARK-3461: Support external groupByKey using repartitionAndSortWithinPartitions


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Implemented
    • Affects Version/s: None
    • Fix Version/s: 1.6.0
    • Component/s: Spark Core
    • Labels: None

Description

    Given that we have SPARK-2978, it seems like we could support an external groupByKey operator pretty easily. We'd just have to wrap the existing iterator exposed by SPARK-2978 with a lookahead iterator that detects the group boundaries. Also, we'd have to override the cache() operator to cache the parent RDD, so that if this RDD is cached it doesn't have to wind through the lookahead iterator.

    I haven't totally followed all the sort-based shuffle internals, but just given the stated semantics of SPARK-2978 it seems like this would be possible.

    It would be really nice to externalize this, because many beginner users write jobs in terms of groupByKey.
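
    As a concrete illustration, here is a minimal sketch of the lookahead-iterator approach described above, written against the public RDD API. The object ExternalGroupBy and the method externalGroupByKey are hypothetical names (not part of Spark), and this is only a sketch of the idea, not Spark's actual implementation; it also omits the cache() override mentioned above.

    import scala.collection.mutable.ArrayBuffer
    import scala.reflect.ClassTag

    import org.apache.spark.Partitioner
    import org.apache.spark.rdd.RDD

    // Hypothetical helper, sketching the approach described in SPARK-3461.
    object ExternalGroupBy {

      /**
       * External groupByKey sketch: shuffle and sort by key via
       * repartitionAndSortWithinPartitions (SPARK-2978), then cut each
       * partition's sorted stream into groups with a lookahead iterator.
       */
      def externalGroupByKey[K: Ordering: ClassTag, V: ClassTag](
          rdd: RDD[(K, V)],
          partitioner: Partitioner): RDD[(K, Iterable[V])] = {

        // After this, records within each partition are sorted by key, so all
        // values for a given key are adjacent; the heavy lifting (spilling and
        // merging) happens in the sort-based shuffle, not in memory here.
        val sorted = rdd.repartitionAndSortWithinPartitions(partitioner)

        sorted.mapPartitions({ iter =>
          val buffered = iter.buffered
          new Iterator[(K, Iterable[V])] {
            override def hasNext: Boolean = buffered.hasNext
            override def next(): (K, Iterable[V]) = {
              // Look ahead at the head of the stream to detect the group
              // boundary: consume records while they share the current key.
              val key = buffered.head._1
              val values = new ArrayBuffer[V]
              while (buffered.hasNext && buffered.head._1 == key) {
                values += buffered.next()._2
              }
              (key, values)
            }
          }
        }, preservesPartitioning = true)
      }
    }

    Usage would look something like ExternalGroupBy.externalGroupByKey(pairs, new HashPartitioner(numPartitions)). Note that each call to next() still materializes a single key's values in memory (matching groupByKey's Iterable[V] contract), but no hash map over all keys is ever built; the grouping itself is a streaming pass over the sorted data.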

Attachments

Activity

People

    Assignee: rxin (Reynold Xin)
    Reporter: pwendell (Patrick Wendell)
    Votes: 1
    Watchers: 12

Dates

    Created:
    Updated:
    Resolved: