SPARK-47383: Support `spark.shutdown.timeout` config



    Description

      org.apache.spark.util.ShutdownHookManager is used to register custom shutdown operations, but its timeout is not easily configurable. The underlying org.apache.hadoop.util.ShutdownHookManager has a default timeout of 30 seconds. That timeout can be changed by setting hadoop.service.shutdown.timeout, but only via core-site.xml/core-default.xml, because a new Hadoop Configuration object is created internally and there is no opportunity to modify it.
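
      For illustration, a minimal sketch of the current limitation, assuming only stock Hadoop APIs: the timeout is read from a freshly created Hadoop Configuration, so only core-site.xml/core-default.xml can influence it.

          import java.util.concurrent.TimeUnit
          import org.apache.hadoop.conf.Configuration

          // A new Configuration only picks up core-default.xml/core-site.xml,
          // so the effective shutdown timeout cannot be changed from Spark code.
          val hadoopConf = new Configuration()
          val effectiveTimeoutSec = hadoopConf.getTimeDuration(
            "hadoop.service.shutdown.timeout", 30L, TimeUnit.SECONDS)
          println(s"Effective Hadoop shutdown timeout: ${effectiveTimeoutSec}s")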

      org.apache.hadoop.util.ShutdownHookManager provides an overload that accepts a custom timeout. Spark should use that overload and allow a user-defined timeout to be passed in.
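
      A rough sketch of what this could look like, assuming the new config is named spark.shutdown.timeout and is read through SparkConf (the exact wiring and default are up to the eventual patch):

          import java.util.concurrent.TimeUnit
          import org.apache.hadoop.util.ShutdownHookManager
          import org.apache.spark.SparkConf

          val conf = new SparkConf()
          // Hypothetical config from this issue; the default mirrors Hadoop's 30s.
          val shutdownTimeoutSec = conf.getTimeAsSeconds("spark.shutdown.timeout", "30s")

          // Use the Hadoop overload that accepts an explicit timeout instead of
          // the default read from hadoop.service.shutdown.timeout.
          ShutdownHookManager.get().addShutdownHook(
            new Runnable {
              override def run(): Unit = {
                // custom shutdown work, e.g. draining event queues before exit
              }
            },
            /* priority = */ 100,
            shutdownTimeoutSec,
            TimeUnit.SECONDS)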

      This is useful because we see timeouts during shutdown and want to allow extra time for the event queues to drain and avoid losing log data.


              People

                Assignee: Rob Reeves
                Reporter: Rob Reeves
                Votes: 0
                Watchers: 2
