SPARK-2876: RDD.partitionBy loads entire partition into memory


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.0.1
    • Fix Version/s: 1.1.0
    • Component/s: PySpark
    • Labels: None

    Description

      RDD.partitionBy fails with an OOM in the PySpark daemon process when given a relatively large dataset. The use of BatchedSerializer(UNLIMITED_BATCH_SIZE) is suspect; most other RDD methods use self._jrdd_deserializer instead.

      y = x.keyBy(...)
      z = y.partitionBy(512) # fails
      z = y.repartition(512) # succeeds
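
      A self-contained reproduction sketch follows; the dataset size, key function, and local master URL are illustrative assumptions, not details from the original report:

      from pyspark import SparkContext

      # Minimal sketch of the reported scenario; sizes and keys are illustrative only.
      sc = SparkContext("local[4]", "partitionBy-oom-repro")

      # Assumed stand-in for the "relatively large dataset" described above.
      x = sc.parallelize(range(10 * 1000 * 1000))

      y = x.keyBy(lambda v: v % 1024)   # assumed key function
      z = y.partitionBy(512)            # reported to fail with an OOM in the daemon
      # z = y.repartition(512)          # reported to succeed
      z.count()                         # force evaluation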
      


            People

              Assignee: Unassigned
              Reporter: Nathan Howell
              Votes: 0
              Watchers: 3

              Dates

                Created:
                Updated:
                Resolved: