Spark > SPARK-50277

[k8s] Apply for executor pods in parallel


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.5.1
    • Fix Version/s: None
    • Component/s: k8s, Kubernetes
    • Labels: None

    Description

      Spark on Kubernetes performs worse than Spark on YARN, and profiling shows that executor pods are requested sequentially. The Kubernetes call that creates a pod is kubernetesClient.pods().inNamespace(namespace).resource(podWithAttachedContainer).create(). Although this call is asynchronous on the API-server side, each invocation still takes 62.57 ms on average, so requesting 280 pods takes 17,520 ms, i.e. roughly 15-16 pods per second. For jobs that need many executors, this rate becomes a bottleneck. Could this logic be changed to request executor pods concurrently, and would doing so have any negative side effects?

      The logic for requesting executors is in the method org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator#requestNewExecutors.
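      A minimal sketch of the concurrent approach the issue proposes. This is not Spark's actual implementation: createPod below is a hypothetical stand-in for the real kubernetesClient.pods().inNamespace(namespace).resource(podWithAttachedContainer).create() call, and it simply sleeps for ~60 ms to model the per-call latency reported above. The point is only to show that fanning the blocking calls out over a small thread pool turns N sequential round trips into roughly N/parallelism.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPodCreate {

    // Hypothetical stand-in for the fabric8 create() call; the ~60 ms
    // sleep models the average per-call latency measured in the issue.
    static String createPod(int id) throws InterruptedException {
        Thread.sleep(60);
        return "exec-" + id;
    }

    // Submit all pod-creation calls to a fixed pool instead of looping
    // sequentially, then wait for every Future so per-pod failures still
    // surface to the caller.
    public static List<String> requestExecutors(int count, int parallelism)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                final int id = i;
                futures.add(pool.submit(() -> createPod(id)));
            }
            List<String> names = new ArrayList<>();
            for (Future<String> f : futures) {
                names.add(f.get());
            }
            return names;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        List<String> names = requestExecutors(20, 8);
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        // Sequentially, 20 calls at 60 ms would need >= 1200 ms;
        // with 8-way parallelism this finishes in a few round trips.
        System.out.println(names.size() + " pods in " + elapsedMs + " ms");
    }
}
```

      Whether this is safe in ExecutorPodsAllocator is exactly the open question of the ticket: the real code would also need to handle API-server rate limiting and partial failures, which the sketch glosses over.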

      Attachments

        Activity


          People

            Assignee: Unassigned
            Reporter: zgzzbws Bowen

            Dates

              Created:
              Updated:
