Spark / SPARK-26872

Use a configurable value for final termination in the JobScheduler.stop() method


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: 2.4.0
    • Fix Version/s: None
    • Component/s: Scheduler, Spark Core
    • Labels: None

    Description

      As a user of Spark, I would like to configure the timeout that controls final termination when the streaming context is stopped while previously queued jobs are still being processed.  Currently, there is a hard-coded limit of one hour around line 129 in the JobScheduler.stop() method:

      // Wait for the queued jobs to complete if indicated
      val terminated = if (processAllReceivedData) {
        jobExecutor.awaitTermination(1, TimeUnit.HOURS)  // just a very large period of time
      } else {
        jobExecutor.awaitTermination(2, TimeUnit.SECONDS)
      }
      

      It would provide additional functionality to the Spark platform if this value were configurable.  My use case may take many hours to finish the queued jobs because they were created from a large data file.
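
      A minimal, self-contained sketch of what a configurable timeout might look like, assuming a hypothetical property name spark.streaming.gracefulStopTimeout (the key and its "1h" default are illustrative only; SparkConf.getTimeAsSeconds and ExecutorService.awaitTermination are existing APIs):

      import java.util.concurrent.{Executors, TimeUnit}

      import org.apache.spark.SparkConf

      object ConfigurableStopTimeoutSketch {
        def main(args: Array[String]): Unit = {
          // Hypothetical property name; not an existing Spark configuration key.
          val conf = new SparkConf()
            .set("spark.streaming.gracefulStopTimeout", "6h")

          // Fall back to the current hard-coded behaviour (one hour) when the key is unset.
          val stopTimeoutSec =
            conf.getTimeAsSeconds("spark.streaming.gracefulStopTimeout", "1h")

          val jobExecutor = Executors.newFixedThreadPool(1)
          jobExecutor.shutdown()

          // Mirrors the JobScheduler.stop() pattern: wait up to the configured
          // period for queued jobs to drain instead of a fixed one hour.
          val terminated = jobExecutor.awaitTermination(stopTimeoutSec, TimeUnit.SECONDS)
          println(s"Job executor terminated cleanly: $terminated")
        }
      }

      Defaulting the new property to "1h" would preserve today's behaviour for anyone who does not set it.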


            People

              Assignee: Unassigned
              Reporter: Steven Rosenberry (smrosenberry)
              Votes: 0
              Watchers: 2
