Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 0.23.3
- Labels: None
- Hadoop Flags: Reviewed
Description
The historyserver can serve up links to jobs that become useless well before the job history files are purged. For example, on a large, heavily used cluster we can rotate through the maximum number of jobs the historyserver tracks fairly quickly. If a user is investigating an issue with a job using a saved historyserver URL, that URL can become useless because the historyserver has forgotten about the job even though the history files are still sitting in HDFS.
We can tell the historyserver to keep track of more jobs by increasing mapreduce.jobhistory.joblist.cache.size, but this has a direct impact on the responsiveness of the main historyserver page since it serves up all the entries to the client at once. It looks like Hadoop 1.x avoided this issue by encoding the history file location into the URLs served up by the historyserver, so it didn't have to track a mapping between job ID and history file location.
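The cache-size workaround mentioned above would go in the history server's configuration; a minimal sketch (the value 40000 is illustrative, not a recommendation, and carries the memory/page-size cost described above):

```xml
<!-- mapred-site.xml on the historyserver host (illustrative only) -->
<property>
  <name>mapreduce.jobhistory.joblist.cache.size</name>
  <!-- Number of jobs the historyserver keeps in its job-list cache.
       Raising this keeps saved job URLs valid longer, but increases
       memory use and the size of the main historyserver page, which
       serves all cached entries to the client at once. -->
  <value>40000</value>
</property>
```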