SPARK-32112: Easier way to repartition/coalesce DataFrames based on the number of parallel tasks that Spark can process at the same time


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

    Description

      Repartitioning/coalescing is important for optimizing a Spark application's performance; however, many users struggle to determine the right number of partitions.
      This issue proposes an easier way to repartition/coalesce DataFrames based on the number of parallel tasks that Spark can process at the same time.

      It will help Spark users determine the optimal number of partitions.

      Expected use-cases:

      • repartition a DataFrame with the calculated number of parallel task slots

      Notes:

      • `SparkContext.maxNumConcurrentTasks` might help, but it cannot be accessed by Spark applications (it is not part of the public API).
      • `SparkContext.getExecutorMemoryStatus` might help to calculate the number of slots available for processing tasks; see the sketch after this list.
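
      As a rough illustration of the workaround hinted at above, the sketch below estimates the number of available task slots from `SparkContext.getExecutorMemoryStatus` and the `spark.executor.cores` setting, then repartitions a DataFrame to that size. It assumes `spark.executor.cores` is set explicitly; names such as `numExecutors` and `availableSlots` are illustrative only and are not part of any proposed API.

```scala
import org.apache.spark.sql.SparkSession

object RepartitionBySlots {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("repartition-by-slots").getOrCreate()
    val sc = spark.sparkContext

    // getExecutorMemoryStatus lists block managers including the driver,
    // so subtract one to approximate the executor count (floor at 1 for local mode).
    val numExecutors = math.max(sc.getExecutorMemoryStatus.size - 1, 1)

    // Estimate task slots as executors * cores per executor. This ignores
    // dynamic allocation churn and a non-default spark.task.cpus.
    val coresPerExecutor = sc.getConf.getInt("spark.executor.cores", 1)
    val availableSlots = numExecutors * coresPerExecutor

    val df = spark.range(0, 1000000L).toDF("id")
    val repartitioned = df.repartition(availableSlots)
    println(s"Repartitioned to ${repartitioned.rdd.getNumPartitions} partitions " +
      s"(estimated $availableSlots concurrent task slots)")

    spark.stop()
  }
}
```

      A built-in helper would remove this guesswork, since these settings only approximate what the scheduler can actually run concurrently.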


          People

            Assignee: Unassigned
            Reporter: Noritaka Sekiyama (moomindani)
            Votes: 0
            Watchers: 2
