Spark / SPARK-12486

Executors are not always terminated successfully by the worker.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.6.1, 2.0.0
    • Component/s: Spark Core
    • Labels: None

    Description

      There are cases where the worker fails to kill an executor.

      One way this can happen: the executor is in a bad state (for example, stuck in heavy GC), fails to heartbeat, and the master tells the worker to kill it, but the executor is so unresponsive that the kill request is ignored.

      The root cause is that the Process.destroy() API is not forceful enough: it requests a normal termination, which an unresponsive process may never act on. Java 8 added a new API, destroyForcibly(), which terminates the process unconditionally. We should use that if available.
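A minimal sketch of the escalation pattern described above, using the Java 8 Process API. The `sleep` child process is a hypothetical stand-in for a hung executor; this is not Spark's actual worker code.

```java
import java.util.concurrent.TimeUnit;

public class ForcefulKill {
    public static void main(String[] args) throws Exception {
        // Hypothetical stand-in for an unresponsive executor process.
        Process executor = new ProcessBuilder("sleep", "60").start();

        // Polite shutdown: on POSIX this typically delivers SIGTERM, which a
        // process wedged in heavy GC may never act on.
        executor.destroy();

        // If it has not exited after a grace period, escalate.
        if (!executor.waitFor(2, TimeUnit.SECONDS)) {
            // Java 8+: forcible termination (SIGKILL on POSIX); the process
            // cannot catch or ignore it.
            executor.destroyForcibly();
            executor.waitFor();
        }
        System.out.println("executor alive? " + executor.isAlive());
    }
}
```

The two-second grace period here is an arbitrary illustration; a real worker would pick a timeout consistent with its heartbeat interval.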


          People

            Assignee: Nong Li (nongli)
            Reporter: Nong Li (nongli)
            Votes: 0
            Watchers: 4
