When you have a long-running job, it may be deleted from the UI soon after it completes if you happen to run a small job after it. This is especially annoying when you run lots of jobs concurrently in the same driver (e.g., multiple Structured Streaming queries). We should sort jobs/stages by their completion timestamp before cleaning them up.
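A minimal sketch of the proposed ordering in plain Python (hypothetical names and structures; Spark's actual listener code is Scala and differs): trim the completed list by completion timestamp so the most recently finished jobs survive.

```python
from collections import namedtuple

# Hypothetical stand-in for Spark's per-job UI data; not Spark's real class.
JobUIData = namedtuple("JobUIData", ["job_id", "completion_time"])

def trim_completed(jobs, retained):
    """Keep only the `retained` jobs with the newest completion times."""
    if retained <= 0:
        return []
    # Oldest-completed first; evict from the front of this ordering.
    by_completion = sorted(jobs, key=lambda j: j.completion_time)
    return by_completion[-retained:]

# Example: job 0 completes last (t=100) after five quick jobs (t=1..5);
# with retained=3, job 0 survives the trim even though its id is lowest.
jobs = [JobUIData(job_id=0, completion_time=100)] + [
    JobUIData(job_id=i, completion_time=i) for i in range(1, 6)
]
print([j.job_id for j in trim_completed(jobs, retained=3)])  # -> [4, 5, 0]
```

Sorting by completion time (rather than trimming in job-id or insertion order) is what keeps a long-running job visible: when it finally completes, it is among the newest completions, so it is the last candidate for eviction.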
In Spark 2.2, there is a separate buffer for completed jobs/stages, so it doesn't need to sort them.
The behavior I expect:
Set "spark.ui.retainedJobs" to 10 and run the following code; job 0 should be kept in the Spark UI.
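The repro snippet itself is not included here. As a stand-in, here is a small simulation of the expected outcome in plain Python with invented job ids and timestamps (not a real Spark run): job 0 finishes last, so ordering by completion time before trimming to 10 entries keeps it.

```python
# Simulated completion times: job 0 runs long and completes at t=100;
# jobs 1..30 are small jobs completing at t=1..30 while job 0 is running.
retained_limit = 10  # mirrors spark.ui.retainedJobs = 10
completions = {0: 100}
completions.update({i: i for i in range(1, 31)})

# Keep the `retained_limit` most recently completed jobs.
kept = sorted(completions, key=completions.get)[-retained_limit:]
print(sorted(kept))  # -> [0, 22, 23, 24, 25, 26, 27, 28, 29, 30]
```

Under the old behavior, running small jobs after the long job could push job 0 out of the UI despite it being the most recently completed; with completion-time ordering it stays among the 10 retained entries.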