SPARK-35677: Support dynamic executor range for dynamic allocation


Details

    • Type: Improvement
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.2.0
    • Fix Version/s: None
    • Component/s: Spark Core, SQL
    • Labels: None

    Description

      Currently, Spark allows users to bound the scalability of a Spark application through dynamic allocation: spark.dynamicAllocation.minExecutors and spark.dynamicAllocation.maxExecutors set the range for scaling up and down. Within an application, Spark uses them to request executors from the cluster manager according to the real-time workload. Once set, the range is fixed for the whole application lifecycle. This is inconvenient for long-running applications where the range should be changeable in some cases (see the sketch after this list), such as:
      1. the cluster manager itself, or the queue the application runs in, scales up and down, which is very likely on modern cloud platforms
      2. the application is long-running, but its timeliness, priority, etc. are determined not only by the workload within the application, but also by the traffic across the cluster manager, or simply by the time of day
      3. etc.
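
      As a reference point, here is a minimal sketch (assuming a standard SparkSession; the app name and the concrete values are illustrative) of how the executor range is declared today. The config keys are the existing dynamic-allocation settings; once the application starts, there is no supported way to widen or narrow the range, which is the limitation this issue targets.

      import org.apache.spark.sql.SparkSession

      object FixedRangeDemo {
        def main(args: Array[String]): Unit = {
          // The min/max executor range is read once at startup and then stays
          // fixed for the entire application lifecycle.
          val spark = SparkSession.builder()
            .appName("fixed-executor-range-demo")
            .config("spark.dynamicAllocation.enabled", "true")
            .config("spark.dynamicAllocation.minExecutors", "2")
            .config("spark.dynamicAllocation.maxExecutors", "50")
            .getOrCreate()

          // ... long-running workloads here; the range above cannot be
          // adjusted at runtime even if the cluster or queue capacity changes.

          spark.stop()
        }
      }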


          People

            Assignee: Unassigned
            Reporter: Kent Yao
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated: