Hadoop Map/Reduce
MAPREDUCE-4705

Historyserver links expire before the history data does

      Description

      The historyserver can serve up links to jobs that become useless well before the job history files are purged. For example on a large, heavily used cluster we can end up rotating through the maximum number of jobs the historyserver can track fairly quickly. If a user was investigating an issue with a job using a saved historyserver URL, that URL can become useless because the historyserver has forgotten about the job even though the history files are still sitting in HDFS.

      We can tell the historyserver to keep track of more jobs by increasing mapreduce.jobhistory.joblist.cache.size, but this has a direct impact on the responsiveness of the main historyserver page since it serves up all the entries to the client at once. It looks like Hadoop 1.x avoided this issue by encoding the history file location into the URLs served up by the historyserver, so it didn't have to track a mapping between job ID and history file location.
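As a stopgap, the cache size mentioned above can be raised in mapred-site.xml. The property name comes from the description; the value shown is only an illustrative assumption, not a recommendation, since a larger cache makes the main historyserver page slower to render:

```xml
<!-- mapred-site.xml: raise the number of jobs the historyserver tracks.
     50000 is an illustrative value only; larger caches increase the payload
     of the main jobs page, which serves all entries to the client at once. -->
<property>
  <name>mapreduce.jobhistory.joblist.cache.size</name>
  <value>50000</value>
</property>
```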

          People

          • Assignee: Jason Lowe
          • Reporter: Jason Lowe
          • Votes: 0
          • Watchers: 10
