[SPARK-24248][K8S] Use the Kubernetes cluster as the backing store for the state of pods


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.4.0
    • Component/s: Kubernetes, Spark Core
    • Labels: None

    Description

      We have a number of places in KubernetesClusterSchedulerBackend that currently maintain the state of pods in memory. However, the Kubernetes API can always give us the most up-to-date and correct view of what our executors are doing. We should move away from in-memory state as much as we can in favor of using the Kubernetes cluster as the source of truth for pod status. Maintaining less state in memory lowers the chance that we accidentally miss updating one of these data structures and break the lifecycle of executors.
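      As an illustration of the direction this proposes, the sketch below asks the Kubernetes API for the current phase of an application's executor pods instead of consulting an in-memory map. It is a minimal sketch, assuming the fabric8 KubernetesClient that the scheduler backend already uses; the label names ("spark-app-selector", "spark-role") reflect the labels Spark applies to executor pods, and the method name fetchExecutorPodStates is hypothetical.

          import io.fabric8.kubernetes.client.{DefaultKubernetesClient, KubernetesClient}

          import scala.collection.JavaConverters._

          object ExecutorPodStateSketch {

            // Ask the cluster, not an in-memory map, for the current phase of each executor pod.
            def fetchExecutorPodStates(
                client: KubernetesClient,
                namespace: String,
                appId: String): Map[String, String] = {
              val executorPods = client.pods()
                .inNamespace(namespace)
                .withLabel("spark-app-selector", appId)   // ties pods to this Spark application
                .withLabel("spark-role", "executor")      // executors only, not the driver
                .list()
                .getItems
                .asScala

              // Pod name -> phase (e.g. "Pending", "Running", "Failed"), as reported by Kubernetes.
              executorPods.map(pod => pod.getMetadata.getName -> pod.getStatus.getPhase).toMap
            }

            def main(args: Array[String]): Unit = {
              // Uses in-cluster or kubeconfig settings to reach the API server.
              val client: KubernetesClient = new DefaultKubernetesClient()
              try {
                fetchExecutorPodStates(client, namespace = "default", appId = "spark-app-example")
                  .foreach { case (name, phase) => println(s"$name: $phase") }
              } finally {
                client.close()
              }
            }
          }

      Because every call reads the live pod list, the backend no longer has to keep its own bookkeeping in sync with executor lifecycle events; the trade-off is an extra round trip to the API server per query.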


            People

              Assignee: Unassigned
              Reporter: Matt Cheah (mcheah)
              Votes: 0
              Watchers: 7

              Dates

                Created:
                Updated:
                Resolved: