Spark / SPARK-751

Consolidate shuffle files


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.8.1
    • Component/s: None
    • Labels: None

    Description

      Right now on each machine, we create M * R temporary files for shuffle, where M = number of map tasks, R = number of reduce tasks.

      This can be pretty high when there are lots of mappers and reducers (e.g. 1k map * 1k reduce = 1 million files for a single shuffle). The high number can cripple the file system and significantly slow the system down.

      We should cut this number down to O(R) instead of O(M*R).
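      The arithmetic behind the proposal can be sketched as below; the function names and the per-core grouping are illustrative assumptions for this sketch, not Spark's actual implementation:

      ```python
      def shuffle_files_per_pair(num_maps: int, num_reduces: int) -> int:
          """Current scheme: one temporary file per (map task, reduce partition) pair."""
          return num_maps * num_reduces

      def shuffle_files_consolidated(num_cores: int, num_reduces: int) -> int:
          """Consolidated scheme (hypothetical sketch).

          Map tasks run on a fixed pool of cores; a task on core c appends
          its output for reduce partition r to a shared file (c, r), so the
          file count is cores * R and no longer grows with the number of
          map tasks -- O(R) per core instead of O(M * R).
          """
          return num_cores * num_reduces

      # 1k maps x 1k reduces: 1,000,000 files today, versus
      # 8 * 1,000 = 8,000 files on an 8-core machine after consolidation.
      print(shuffle_files_per_pair(1000, 1000))   # 1000000
      print(shuffle_files_consolidated(8, 1000))  # 8000
      ```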


          People

            Assignee: Jason Dai (jason.dai)
            Reporter: Reynold Xin (rxin)
            Votes: 2
            Watchers: 8
