[SPARK-34361] Dynamic allocation on K8s kills executors with running tasks


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0, 3.0.1, 3.0.2, 3.1.0, 3.1.1, 3.1.2, 3.2.0
    • Fix Version/s: 3.1.2, 3.2.0
    • Component/s: Kubernetes
    • Labels: None

      Description

      There is a race between the executor pod allocator (ExecutorPodsAllocator) and the cluster scheduler backend.
      During downscaling (with dynamic allocation enabled) we saw many newly registered executors killed while they still had running tasks.

      The pattern in the log is the following:

      21/02/01 15:12:03 INFO ExecutorMonitor: New executor 312 has registered (new total is 138)
      ...
      21/02/01 15:12:03 INFO TaskSetManager: Starting task 247.0 in stage 4.0 (TID 2079, 100.100.18.138, executor 312, partition 247, PROCESS_LOCAL, 8777 bytes)
      21/02/01 15:12:03 INFO ExecutorPodsAllocator: Deleting 3 excess pod requests (408,312,307).
      ...
      21/02/01 15:12:04 ERROR TaskSchedulerImpl: Lost executor 312 on 100.100.18.138: The executor with id 312 was deleted by a user or the framework.
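
      For reference, this only applies when dynamic allocation is enabled for a Spark application running on Kubernetes. Below is a minimal, illustrative sketch of such a setup; the master URL, container image, application name, and executor bounds are placeholder values chosen for the example, not settings taken from this report.

      // Illustrative sketch only (not part of this report): a Spark-on-Kubernetes
      // application with dynamic allocation enabled, i.e. the setup in which the
      // ExecutorPodsAllocator and the scheduler backend can race as described above.
      // Master URL, image name, and executor bounds are placeholders.
      import org.apache.spark.sql.SparkSession

      object DynamicAllocationOnK8s {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("dynamic-allocation-on-k8s-example")
            .master("k8s://https://kubernetes.default.svc:443") // placeholder API server address
            .config("spark.kubernetes.container.image", "example/spark:3.1.1")
            .config("spark.dynamicAllocation.enabled", "true")
            // Shuffle tracking is used because there is no external shuffle service on K8s.
            .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
            .config("spark.dynamicAllocation.minExecutors", "10")
            .config("spark.dynamicAllocation.maxExecutors", "400")
            .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
            .getOrCreate()

          // Any workload whose stages need very different executor counts can cause
          // downscaling while late executors are still registering, which is the
          // window in which the excess-pod deletion above can hit a busy executor.
          spark.range(0L, 1000000000L).selectExpr("sum(id)").show()

          spark.stop()
        }
      }

      Per the Fix Version/s above, the race is resolved in 3.1.2 and 3.2.0.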
      

            People

            • Assignee: attilapiros (Attila Zsolt Piros)
            • Reporter: attilapiros (Attila Zsolt Piros)