Spark / SPARK-2711

Create a ShuffleMemoryManager that allocates across spilling collections in the same task


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Fix Version/s: 1.1.0

    Description

      Right now, if a single task contains two ExternalAppendOnlyMaps, they don't compete correctly for memory. This can happen, for example, in a task that is both reducing data from its parent RDD and writing it out to files for a future shuffle, e.g. rdd.groupByKey(...).map(...).groupByKey(...) (grouping by another key).
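      The resolution (a ShuffleMemoryManager) arbitrates the shuffle memory pool across concurrent consumers so that no single spilling collection can claim everything. The sketch below is a hypothetical, much-simplified illustration of that idea, not Spark's actual class: each registered consumer may hold at most 1/N of the pool, where N is the number of active consumers. The names FairMemoryPool, acquire, and release are illustrative assumptions.

```scala
// Hypothetical sketch of fair memory arbitration across spilling collections.
// Each of the N active consumers (e.g. two ExternalAppendOnlyMaps in one task)
// may hold at most maxBytes / N, so neither can starve the other.
class FairMemoryPool(val maxBytes: Long) {
  private val held = scala.collection.mutable.Map.empty[String, Long]

  /** Try to acquire `requested` bytes; returns the number of bytes granted. */
  def acquire(consumer: String, requested: Long): Long = synchronized {
    held.getOrElseUpdate(consumer, 0L)
    val fairShare = maxBytes / held.size           // 1/N of the pool per consumer
    val grant = math.max(0L, math.min(requested, fairShare - held(consumer)))
    held(consumer) += grant
    grant                                          // a grant of 0 would force a spill
  }

  /** Release everything this consumer holds (e.g. after it spills to disk). */
  def release(consumer: String): Unit = synchronized {
    held -= consumer
  }
}
```

      With a 1000-byte pool, a lone consumer can be granted up to 1000 bytes; once a second consumer registers, each is capped at 500, so a later request that exceeds the requester's remaining fair share is only partially granted and the collection would spill the rest to disk.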


People

    Assignee: Matei Alexandru Zaharia (matei)
    Reporter: Matei Alexandru Zaharia (matei)
    Votes: 0
    Watchers: 5

Dates

    Created:
    Updated:
    Resolved: