SPARK-36575: Executor lost may cause Spark stage to hang


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.3
    • Fix Version/s: 3.3.0
    • Component/s: Scheduler
    • Labels: None

    Description

      When an executor finishes a task of some stage, the driver receives a `StatusUpdate` event and handles it. If at the same time the driver finds that the executor's heartbeat has timed out, the driver also needs to handle an ExecutorLost event for that executor. There is a race condition between the two, which can leave the task never rescheduled and the stage hanging forever.
      The problem is that `TaskResultGetter.enqueueSuccessfulTask` hands the successful task off to an asynchronous thread, which means the synchronized lock of `TaskSchedulerImpl` is released prematurely, partway through the handling: https://github.com/apache/spark/blob/branch-2.3/core/src/main/scala/org/apache/spark/scheduler/TaskResultGetter.scala#L61. As a result, `TaskSchedulerImpl` may handle executorLost first, and only afterwards does the asynchronous thread go on to handle the successful task. This leaves `TaskSetManager.successful` and `TaskSetManager.tasksSuccessful` with incorrect values.
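
      The interleaving is easier to follow with the lock boundaries written out. Below is a minimal sketch in Scala, using simplified stand-in state rather than the real `TaskSchedulerImpl` / `TaskResultGetter` fields; only the method names are borrowed from the description above.

```scala
// Minimal, self-contained sketch of the interleaving described above, with assumed,
// simplified state. Not the real TaskSchedulerImpl / TaskResultGetter code.
import java.util.concurrent.{Executors, TimeUnit}

object LostExecutorRaceSketch {
  private val schedulerLock = new Object                       // stands in for the TaskSchedulerImpl lock
  private val taskResultGetterPool = Executors.newFixedThreadPool(1)

  @volatile private var taskMarkedSuccessful = false
  @volatile private var taskResubmitted = false

  // dispatcher thread: StatusUpdate for a finished task. The result handling is pushed
  // onto the task-result-getter pool, so the scheduler lock is released here, before the
  // task has actually been marked successful.
  def statusUpdate(): Unit = schedulerLock.synchronized {
    taskResultGetterPool.submit(new Runnable {
      override def run(): Unit = handleSuccessfulTask()
    })
  } // lock released; the success bookkeeping is still pending on the pool thread

  // dispatcher thread: heartbeat timeout for the same executor. Because the lock is free,
  // this can run before handleSuccessfulTask and treat the already-finished task as a
  // lost running task, resubmitting it.
  def executorLost(): Unit = schedulerLock.synchronized {
    if (!taskMarkedSuccessful) taskResubmitted = true
  }

  // task-result-getter thread: only now is the task marked successful, after executorLost
  // may already have acted on the stale "still running" state.
  private def handleSuccessfulTask(): Unit = schedulerLock.synchronized {
    taskMarkedSuccessful = true
  }

  def main(args: Array[String]): Unit = {
    statusUpdate()   // task finished on the executor
    executorLost()   // heartbeat timeout can be handled before the async success handler runs
    taskResultGetterPool.shutdown()
    taskResultGetterPool.awaitTermination(5, TimeUnit.SECONDS)
    // With this interleaving the same task attempt ends up both "resubmitted" and "successful".
    println(s"resubmitted=$taskResubmitted, successful=$taskMarkedSuccessful")
  }
}
```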

      Then `HeartbeatReceiver.expireDeadHosts` executed `killAndReplaceExecutor`, which caused `TaskSchedulerImpl.executorLost` to be executed twice for the same executor. `copiesRunning(index) -= 1` is processed in each `executorLost`, so the two `executorLost` calls drove `copiesRunning(index)` down to -1, which left the stage hanging.
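
      For illustration, here is a tiny sketch of the double decrement, with assumed, simplified bookkeeping rather than the actual `TaskSetManager` fields; per the description above, once the counter goes negative the task is never rescheduled and the stage hangs.

```scala
// Tiny sketch of the double decrement (assumed, simplified bookkeeping).
object DoubleExecutorLostSketch {
  def main(args: Array[String]): Unit = {
    val copiesRunning = Array(1)        // one running copy of the task attempt

    def executorLost(): Unit = copiesRunning(0) -= 1

    executorLost()                      // heartbeat-timeout path: 1 -> 0
    executorLost()                      // killAndReplaceExecutor path: 0 -> -1
    // copiesRunning is now -1, so the scheduler's accounting for this task is broken.
    println(copiesRunning(0))           // prints -1
  }
}
```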

      Related log lines from when the issue occurred:
      21/08/05 02:58:14,784 INFO [dispatcher-event-loop-8] TaskSetManager: Starting task 4004.0 in stage 1328625.0 (TID 347212402, 10.109.89.3, executor 366724, partition 4004, ANY, 7994 bytes)
      21/08/05 03:00:24,126 ERROR [dispatcher-event-loop-4] TaskSchedulerImpl: Lost executor 366724 on 10.109.89.3: Executor heartbeat timed out after 140830 ms
      21/08/05 03:00:24,218 WARN [dispatcher-event-loop-4] TaskSetManager: Lost task 4004.0 in stage 1328625.0 (TID 347212402, 10.109.89.3, executor 366724): ExecutorLostFailure (executor 366724 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 140830 ms
      21/08/05 03:00:24,542 INFO [task-result-getter-2] TaskSetManager: Finished task 4004.0 in stage 1328625.0 (TID 347212402) in 129758 ms on 10.109.89.3 (executor 366724) (3047/5400)

      21/08/05 03:00:34,621 INFO [dispatcher-event-loop-8] TaskSchedulerImpl: Executor 366724 on 10.109.89.3 killed by driver.
      21/08/05 03:00:34,771 INFO [spark-listener-group-executorManagement] ExecutorMonitor: Executor 366724 removed (new total is 793)

      21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] DAGScheduler: Executor lost: 366724 (epoch 417416)
      21/08/05 03:00:42,360 INFO [dispatcher-event-loop-14] BlockManagerMasterEndpoint: Trying to remove executor 366724 from BlockManagerMaster.
      21/08/05 03:00:42,360 INFO [dispatcher-event-loop-14] BlockManagerMasterEndpoint: Removing block manager BlockManagerId(366724, 10.109.89.3, 43402, None)
      21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] BlockManagerMaster: Removed 366724 successfully in removeExecutor
      21/08/05 03:00:42,360 INFO [dag-scheduler-event-loop] DAGScheduler: Shuffle files lost for executor: 366724 (epoch 417416)
      21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] DAGScheduler: Executor lost: 366724 (epoch 417473)
      21/08/05 03:00:44,584 INFO [dispatcher-event-loop-15] BlockManagerMasterEndpoint: Trying to remove executor 366724 from BlockManagerMaster.
      21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] BlockManagerMaster: Removed 366724 successfully in removeExecutor
      21/08/05 03:00:44,584 INFO [dag-scheduler-event-loop] DAGScheduler: Shuffle files lost for executor: 366724 (epoch 417473)

          People

            Assignee: sleep1661 hujiahua
            Reporter: sleep1661 hujiahua
            Votes: 0
            Watchers: 4
