
[SPARK-43496] Have a separate config for memory limits for Kubernetes pods


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.5.2, 3.4.4
    • Fix Version/s: None
    • Component/s: Kubernetes

    Description

      The whole memory allocated to the JVM is set in the pod resources as both the request and the limit.

      This means there is no way to use extra memory for burst-like jobs in a shared environment.

      For example, if a Spark job uses an external process (outside the JVM) to access data, a bit of extra memory is required for that, and being able to configure a higher memory limit for the pod would be useful; see the sketch below.
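
      A minimal sketch of what a burst-friendly resource spec could look like, written against the fabric8 Kubernetes client that Spark's Kubernetes backend already uses; the quantities are illustrative, not defaults:

      import io.fabric8.kubernetes.api.model.{Quantity, ResourceRequirementsBuilder}

      // Today Spark puts the same value into both maps. Decoupling them would
      // let the scheduler reserve 4Gi while the container may burst up to 6Gi
      // before the kubelet OOM-kills it.
      val resources = new ResourceRequirementsBuilder()
        .addToRequests("memory", new Quantity("4Gi")) // guaranteed reservation
        .addToLimits("memory", new Quantity("6Gi"))   // hard ceiling, above the request
        .build()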

      Another thought here: having a way to configure different JVM and pod memory requests could also be a valid use case.
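
      For illustration only, a SparkConf sketch of how separate JVM and pod memory settings might be expressed; the two spark.kubernetes.executor.* key names below are hypothetical placeholders, not necessarily the names used in the linked PR:

      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.executor.memory", "4g")                    // JVM heap (existing config)
        .set("spark.kubernetes.executor.request.memory", "5g") // pod memory request (hypothetical key)
        .set("spark.kubernetes.executor.limit.memory", "6g")   // pod memory limit (hypothetical key)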

       

      GitHub PR: https://github.com/apache/spark/pull/41067

       


            People

              Assignee: Unassigned
              Reporter: Alexander Yerenkow (yerenkow)
              Votes: 2
              Watchers: 3
