Spark / SPARK-35723

[K8s] Spark executors memory request should be allowed to deviate from limit


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.2
    • Fix Version/s: None
    • Component/s: Kubernetes
    • Labels: None

    Description

      Currently the driver and executor memory requests always equal the limit.
      As stated in SPARK-23825, this is a reasonable default and is especially important for the driver.

      For executors, however, it might be useful for users to deviate from this default.

      In typical development environments on K8s, the namespace quotas are an upper bound on the memory requests that are possible.
      The limits, however, can be much higher. For development, Spark is often run in client mode. While the driver should request the memory it needs, we want to leverage all the resources of the cluster with executors if they are free - and can live with an executor possibly being killed eventually.
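      The quota-versus-limit situation described above can be sketched with plain Kubernetes objects; the names and sizes below are illustrative only and are not taken from Spark or this issue:

      ```yaml
      # A ResourceQuota can cap the sum of memory *requests* in the
      # namespace, independently of the containers' limits.
      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: dev-quota            # hypothetical name
      spec:
        hard:
          requests.memory: 32Gi    # executor requests must fit under this
      ---
      # An executor-like pod with request < limit (Burstable QoS):
      # it is scheduled against the 4Gi request but may use up to 16Gi
      # if the node has free memory, and is a preferred eviction victim
      # under node memory pressure.
      apiVersion: v1
      kind: Pod
      metadata:
        name: spark-exec-example   # hypothetical name
      spec:
        containers:
          - name: executor
            image: apache/spark    # illustrative image
            resources:
              requests:
                memory: 4Gi
              limits:
                memory: 16Gi
      ```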

      Thus I propose the introduction of {{spark.{driver,executor}.limit.memory}}, similar to the {{spark.{driver,executor}.limit.cpu}}.
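      Under the proposal, the existing memory setting would continue to drive the pod's request while the new key would raise only the limit; a hypothetical spark-defaults.conf sketch (the {{*.limit.memory}} keys do not exist yet, and the values are illustrative):

      ```
      # Today: spark.executor.memory sets both the pod's memory request
      # and its memory limit.
      spark.executor.memory        4g

      # Proposed key (does not exist yet): decouple the limit so the pod
      # requests 4g against the namespace quota but may burst to 16g.
      spark.executor.limit.memory  16g
      ```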

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Christian Thiel (cth)
            Votes: 1
            Watchers: 3

            Dates

              Created:
              Updated: