Description
When a long-running job completes, it may be removed from the UI very quickly if a small job happens to run after it. This is pretty annoying when you run lots of jobs concurrently in the same driver (e.g., running multiple Structured Streaming queries). We should sort jobs/stages by their completion timestamps before cleaning them up.
In 2.2, Spark has a separate buffer for completed jobs/stages, so it doesn't need to sort them.
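For illustration, here is a minimal sketch (not Spark's actual listener code) of the cleanup order proposed above: evict the jobs that completed earliest rather than relying on insertion order. JobRecord, retainedJobs, and trimCompletedJobs are hypothetical names used only for this sketch.

import scala.collection.mutable

// Hypothetical record type; Spark's listener tracks richer UI data per job.
case class JobRecord(jobId: Int, completionTime: Option[Long])

val retainedJobs = 10
val completedJobs = mutable.Buffer[JobRecord]()

// Drop the jobs with the earliest completion timestamps, so a job that just
// finished (e.g. a long-running job 0) is not evicted ahead of jobs that
// completed before it, regardless of when each job was started.
def trimCompletedJobs(): Unit = {
  if (completedJobs.size > retainedJobs) {
    val byCompletion = completedJobs.sortBy(_.completionTime.getOrElse(Long.MaxValue))
    val survivors = byCompletion.drop(completedJobs.size - retainedJobs)
    completedJobs.clear()
    completedJobs ++= survivors
  }
}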
The behavior I expect:
Set "spark.ui.retainedJobs" to 10 and run the following codes, job 0 should be kept in the Spark UI.
new Thread() {
  override def run() {
    // job 0: a long-running job that sleeps for 10 seconds
    sc.makeRDD(1 to 1, 1).foreach { i => Thread.sleep(10000) }
  }
}.start()

Thread.sleep(1000)

// 20 short jobs that complete while job 0 is still running
for (_ <- 1 to 20) {
  new Thread() {
    override def run() {
      sc.makeRDD(1 to 1, 1).foreach { i => }
    }
  }.start()
}

Thread.sleep(15000)

// one more job after job 0 has completed; job 0 should still be retained
sc.makeRDD(1 to 1, 1).foreach { i => }