Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Won't Fix
- Affects Version/s: 2.4.0
- Component/s: None
- Labels: None
Description
As a user of Spark, I would like to configure the timeout that controls final termination after stopping the streaming context while previously queued jobs are still being processed. Currently there is a hard-coded limit of one hour around line 129 of the JobScheduler.stop() method:

    // Wait for the queued jobs to complete if indicated
    val terminated = if (processAllReceivedData) {
      jobExecutor.awaitTermination(1, TimeUnit.HOURS) // just a very large period of time
    } else {
      jobExecutor.awaitTermination(2, TimeUnit.SECONDS)
    }
It would provide additional functionality to the Spark platform if this value were configurable. In my use case, the queued jobs can take many hours to finish because they were created from a large data file, so the one-hour cap is too short.
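One possible shape for such a change, sketched below: read the graceful-stop timeout from the configuration instead of hard-coding one hour. The property name `spark.streaming.gracefulStopTimeout` and the `ConfLike` class are assumptions for illustration only (`ConfLike` stands in for `SparkConf` so the sketch is self-contained); the actual key and plumbing would be decided in review.

```scala
// Hypothetical sketch: pick the awaitTermination timeout from configuration
// rather than the hard-coded 1 hour in JobScheduler.stop().
// ConfLike is a stand-in for SparkConf; the config key name is an assumption.
class ConfLike(settings: Map[String, String]) {
  // Simplified time parser: accepts plain seconds, "3600s", or "2h".
  def getTimeAsSeconds(key: String, default: String): Long = {
    val raw = settings.getOrElse(key, default)
    if (raw.endsWith("h")) raw.dropRight(1).toLong * 3600
    else if (raw.endsWith("s")) raw.dropRight(1).toLong
    else raw.toLong
  }
}

// Returns the termination wait in seconds: configurable (default 1 hour,
// matching today's behavior) when draining queued jobs, otherwise 2 seconds.
def stopTimeoutSeconds(conf: ConfLike, processAllReceivedData: Boolean): Long =
  if (processAllReceivedData)
    conf.getTimeAsSeconds("spark.streaming.gracefulStopTimeout", "1h")
  else 2L

// Usage: a job that needs four hours to drain would simply set the property.
val conf = new ConfLike(Map("spark.streaming.gracefulStopTimeout" -> "4h"))
val timeout = stopTimeoutSeconds(conf, processAllReceivedData = true)
println(timeout) // seconds to pass to jobExecutor.awaitTermination
```

The default preserves the current one-hour behavior for users who set nothing, so the change would be backward compatible.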