Spark / SPARK-32470

Remove task result size check for shuffle map stage


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.4.6, 3.0.0, 3.1.0
    • Fix Version/s: 3.1.0
    • Component/s: Spark Core
    • Labels: None

    Description

      The task result of a shuffle map stage is not the query result; it consists only of the map status and metrics accumulator updates. Aside from the metrics, which can vary in size, the total task result size depends solely on the number of tasks, and the number of tasks can grow large regardless of the stage's output size. For example, the number of tasks generated by `CartesianProduct` is the square of `spark.sql.shuffle.partitions`: with `spark.sql.shuffle.partitions` set to 200 you get 40,000 tasks, and with 500 you get 250,000 tasks, which can easily exceed the default limit of `spark.driver.maxResultSize`:


      org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 66496 tasks (4.0 GiB) is bigger than spark.driver.maxResultSize (4.0 GiB)
      


      However, the map status and accumulator updates are used by the driver only to update the overall map output statistics and the metrics of the query, and they are not cached on the driver, so they cannot cause catastrophic memory issues there. We should therefore remove this check for shuffle map stage tasks.
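To illustrate the arithmetic behind the failure, here is a minimal sketch (plain Python, not Spark code; the per-task result size is an assumed figure, since real map status and accumulator payloads vary):

```python
# Hypothetical back-of-the-envelope model of how a shuffle map stage's
# accumulated task results can trip spark.driver.maxResultSize even
# though no query rows are being returned to the driver.

MAX_RESULT_SIZE = 4 * 1024**3  # the 4 GiB limit from the error above, in bytes


def cartesian_task_count(shuffle_partitions: int) -> int:
    # CartesianProduct pairs every partition of one input with every
    # partition of the other, so the task count grows quadratically.
    return shuffle_partitions ** 2


def total_result_size(num_tasks: int, per_task_bytes: int) -> int:
    # The driver sums each task's serialized result (map status plus
    # accumulator updates) and compares the running total to the limit.
    return num_tasks * per_task_bytes


# With 500 shuffle partitions and an assumed ~65 KiB per task result,
# the aggregate blows past 4 GiB.
tasks = cartesian_task_count(500)  # 250,000 tasks
total = total_result_size(tasks, 65 * 1024)
print(tasks, total > MAX_RESULT_SIZE)
```

The point of the sketch is that the total scales with the task count alone, independent of the stage's actual output size, which is why the check is a poor fit for shuffle map stages.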

People

    Assignee: maryannxue Wei Xue
    Reporter: maryannxue Wei Xue
    Votes: 0
    Watchers: 4

Dates

    Created:
    Updated:
    Resolved: