Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Fix Version/s: 4.0.0
Description
org.apache.spark.util.ShutdownHookManager is used to register custom shutdown operations, but its shutdown timeout is not easily configurable. The underlying org.apache.hadoop.util.ShutdownHookManager has a default timeout of 30 seconds. It can be changed by setting hadoop.service.shutdown.timeout, but this must be done in core-site.xml/core-default.xml, because a new Hadoop Configuration object is created internally and there is no opportunity to modify it.
org.apache.hadoop.util.ShutdownHookManager provides an overload that accepts a custom timeout. Spark should use that overload and allow a user-defined timeout.
This is useful because we see timeouts during shutdown and want to give the event queues extra time to drain, avoiding loss of log data.
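For illustration, here is a minimal sketch of how a hook could be registered through that Hadoop overload, with the timeout read from a Spark setting. The config key spark.shutdown.timeout, the priority value, and the hook body are assumptions for the example, not necessarily what Spark actually adopted.
{code:scala}
import java.util.concurrent.TimeUnit

import org.apache.hadoop.util.{ShutdownHookManager => HadoopShutdownHookManager}
import org.apache.spark.SparkConf

object ShutdownTimeoutExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical config key for the example; the key Spark actually uses may differ.
    val conf = new SparkConf()
    val timeoutSeconds = conf.getLong("spark.shutdown.timeout", 30L)

    // Hadoop's overload takes an explicit per-hook timeout, so the value no longer
    // has to come from hadoop.service.shutdown.timeout in core-site.xml/core-default.xml.
    HadoopShutdownHookManager.get().addShutdownHook(
      new Runnable {
        override def run(): Unit = {
          // Custom shutdown work, e.g. draining event queues before the JVM exits.
        }
      },
      50,               // hook priority (arbitrary value for the example)
      timeoutSeconds,   // user-defined timeout instead of the 30-second default
      TimeUnit.SECONDS)
  }
}
{code}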