SPARK-34104

Allow users to specify a maximum decommissioning time


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.1.0, 3.1.1, 3.2.0
    • Fix Version/s: 3.2.0
    • Component/s: Spark Core
    • Labels: None

    Description

      We currently let users set the predicted time at which the cluster manager or cloud provider will terminate a decommissioning executor, but for nodes where Spark itself triggers decommissioning we should also let users specify a maximum time to allow the executor to decommission.


      This is especially important if we start triggering decommissioning in more places (for example, for excluded executors found to be flaky, which may or may not be able to decommission successfully).
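
      A minimal sketch of how such a bound might be configured alongside the existing setting for externally triggered termination. The key `spark.executor.decommission.forceKillTimeout` is assumed to be the name introduced for this feature; `spark.executor.decommission.killInterval` is the pre-existing key for the cluster-manager/cloud-provider case.

      ```properties
      # spark-defaults.conf (sketch, key names are assumptions)

      # Pre-existing: expected time after which an outside service
      # (cluster manager / cloud provider) kills a decommissioned executor.
      spark.executor.decommission.killInterval=60s

      # Proposed here: maximum time Spark itself allows an executor to
      # decommission before forcing it to exit.
      spark.executor.decommission.forceKillTimeout=120s
      ```

      The same values could be passed per application via `--conf` on `spark-submit`.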

      Attachments

        Activity

          People

            Assignee: Holden Karau
            Reporter: Holden Karau
            Votes: 0
            Watchers: 3
