Spark / SPARK-1065

PySpark runs out of memory with large broadcast variables


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.7.3, 0.8.1, 0.9.0
    • Fix Version/s: 1.1.0
    • Component/s: PySpark
    • Labels: None

    Description

      PySpark's driver components may run out of memory when broadcasting large variables (say 1 gigabyte).

      Because PySpark's broadcast is implemented on top of Java Spark's broadcast by broadcasting a pickled Python object as a byte array, we may be retaining multiple copies of the large object: a pickled copy in the JVM and a deserialized copy in the Python driver.

      The problem could also be due to the peak memory required while pickling the object itself.
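      As an illustration of where the copies come from, the driver-side path looks roughly like the sketch below (not the actual PySpark internals; the JVM hand-off is shown as a hypothetical commented-out call):

          import pickle

          large_obj = bytearray(1 << 30)      # copy 1: the live object held by the Python driver

          # PySpark pickles the object before handing the bytes to the JVM broadcast machinery.
          pickled = pickle.dumps(large_obj)   # copy 2: the pickled byte string, built in driver memory

          # jvm_broadcast(pickled)            # copy 3 (hypothetical call): the byte array
          #                                   # retained inside the JVM by Java Spark's broadcast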

      PySpark is also affected by broadcast variables not being garbage collected. Adding an unpersist() method to broadcast variables may fix this: https://github.com/apache/incubator-spark/pull/543.
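      If that fix lands and is exposed to Python, releasing a broadcast variable might look like the sketch below (the Python-side unpersist() is an assumption here; the linked PR targets the Scala side):

          # Assuming an existing SparkContext `sc` and a large picklable object `large_obj`:
          b = sc.broadcast(large_obj)
          sc.parallelize(range(10)).map(lambda _: len(b.value)).count()
          b.unpersist()   # drop the broadcast's blocks once no running job still needs them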

      As a first step to fixing this, we should write a failing test to reproduce the error.
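      Such a test might look like the sketch below (master URL and sizes are illustrative; on a memory-constrained driver it is expected to fail today, which is the point of the test):

          from pyspark import SparkContext

          def test_large_broadcast():
              sc = SparkContext("local", "large-broadcast-test")
              try:
                  data = bytearray(1 << 30)   # ~1 GB payload, pickled on the driver
                  b = sc.broadcast(data)
                  # Force the executors to deserialize the broadcast value.
                  total = sc.parallelize(range(4), 4).map(lambda _: len(b.value)).sum()
                  assert total == 4 * (1 << 30)
              finally:
                  sc.stop()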

      This was discovered by sandy in the thread "trouble with broadcast variables on pyspark".


          People

            Assignee: Davies Liu (davies)
            Reporter: Josh Rosen (joshrosen)
            Votes: 2
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved: