SPARK-24105: Spark 2.3.0 on Kubernetes

Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 2.3.0
    • Fix Version/s: None
    • Component/s: Kubernetes, Spark Core
    • Labels: None

    Description

      Right now it is only possible to define node selector configuration through spark.kubernetes.node.selector.[labelKey], and the same selector is applied to both driver and executor pods. Without the ability to place driver and executor pods on separate nodes, the cluster can run into a livelock scenario: a large number of spark-submits can fill the entire cluster capacity with driver pods, leaving no room for executor pods to do any work.
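
      For example, under the current scheme every pod of an application is constrained to the same set of nodes. A minimal spark-submit sketch against Spark 2.3.0 (the label spark=true, the API server address, and the image name are illustrative assumptions, not values from this issue):

          # This single selector constrains BOTH the driver and all executor pods.
          spark-submit \
            --master k8s://https://<k8s-apiserver>:6443 \
            --deploy-mode cluster \
            --class org.apache.spark.examples.SparkPi \
            --conf spark.kubernetes.container.image=<spark-image> \
            --conf spark.kubernetes.node.selector.spark=true \
            local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

      Because the selector is shared, driver and executor pods compete for capacity on exactly the same nodes.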

       

      To avoid this livelock, node selector (and in future affinity/anti-affinity) configuration needs to be supported separately for the driver and for the executors, as sketched below.
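
      One way the proposal could look is to split the existing property into per-role variants. The property names spark.kubernetes.driver.node.selector.[labelKey] and spark.kubernetes.executor.node.selector.[labelKey] below are hypothetical, as are the node-role labels; this is a sketch of the intent, not an existing API:

          # Hypothetical per-role selectors: drivers go to a small dedicated
          # pool, executors keep the rest of the cluster.
          spark-submit \
            --master k8s://https://<k8s-apiserver>:6443 \
            --deploy-mode cluster \
            --class org.apache.spark.examples.SparkPi \
            --conf spark.kubernetes.container.image=<spark-image> \
            --conf spark.kubernetes.driver.node.selector.node-role=driver \
            --conf spark.kubernetes.executor.node.selector.node-role=worker \
            local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

      Keeping the driver pool small bounds the number of concurrently running drivers, so excess submissions wait in Pending instead of starving executors of capacity.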

       

People

    Assignee: Unassigned
    Reporter: Lenin (gitfy)
    Votes: 1
    Watchers: 6

Dates

    Created:
    Updated:
    Resolved: