Spark / SPARK-20286

dynamicAllocation.executorIdleTimeout is ignored after unpersist


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.1
    • Fix Version/s: 3.0.0
    • Component/s: Spark Core
    • Labels: None

    Description

      With dynamic allocation enabled, executors whose cached data has since been unpersisted still appear to be governed by spark.dynamicAllocation.cachedExecutorIdleTimeout rather than spark.dynamicAllocation.executorIdleTimeout. Under the default configuration (spark.dynamicAllocation.cachedExecutorIdleTimeout = Infinity), an executor that once held now-unpersisted data is therefore never released until the job ends.
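
      For context, both settings can be placed on the SparkConf when the context is created. A minimal sketch, assuming illustrative timeout values (any two distinct, finite values expose the difference); note that dynamic allocation in Spark 2.x also requires the external shuffle service:

          import org.apache.spark.{SparkConf, SparkContext}

          // Illustrative values; only the fact that the two timeouts differ matters here.
          val conf = new SparkConf()
            .setAppName("idle-timeout-repro") // hypothetical app name
            .set("spark.dynamicAllocation.enabled", "true")
            .set("spark.shuffle.service.enabled", "true") // needed by dynamic allocation
            .set("spark.dynamicAllocation.executorIdleTimeout", "30s")
            .set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "120s")
          val sc = new SparkContext(conf)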

      How to reproduce

      • Set different values for spark.dynamicAllocation.executorIdleTimeout and spark.dynamicAllocation.cachedExecutorIdleTimeout (as in the configuration sketch above)
      • Load a file into an RDD and persist it
      • Execute an action on the RDD (such as a count) so that some executors are allocated
      • When the action has finished, unpersist the RDD
      • The application UI correctly removes the persisted data from the Storage tab, but the Executors tab shows that the executors remain active until spark.dynamicAllocation.cachedExecutorIdleTimeout is reached (see the spark-shell sketch after this list)
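
      A minimal spark-shell reproduction of these steps, assuming the configuration sketched above; the input path is a placeholder:

          // Any file large enough to spread cached blocks over several executors will do.
          val rdd = sc.textFile("/tmp/input.txt")
          rdd.persist()
          rdd.count()          // allocates executors and caches blocks on them
          rdd.unpersist(true)  // blocking unpersist; the Storage tab empties immediately

          // Expected: the now-idle executors are released once
          // spark.dynamicAllocation.executorIdleTimeout elapses.
          // Observed: they are kept until spark.dynamicAllocation.cachedExecutorIdleTimeout,
          // which is Infinity by default, so they are never released.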

    People

      Assignee: Marcelo Masiero Vanzin (vanzin)
      Reporter: Miguel Pérez (mpasa)
      Votes: 0
      Watchers: 13
