Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.3-alpha
    • Fix Version/s: 2.6.0
    • Component/s: resourcemanager
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      YARN currently has the following config:

      yarn.resourcemanager.am.max-retries

      This config defaults to 2 and defines how many times to retry a "failed" AM before failing the whole YARN job. YARN counts an AM as failed if the node it was running on dies (the NM will time out, which counts as a failure for the AM), or if the AM itself dies.
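For reference, the existing setting would appear in yarn-site.xml along these lines (the property name and the default of 2 come from the description above; the description text in the snippet is illustrative, not the shipped wording):

```xml
<property>
  <name>yarn.resourcemanager.am.max-retries</name>
  <value>2</value>
  <!-- Illustrative description; maximum AM attempts before the RM fails the app. -->
  <description>Number of AM failures tolerated before failing the application.</description>
</property>
```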

      This configuration is insufficient for long-running (or indefinitely running) YARN jobs, since the machine (or NM) that the AM is running on will eventually need to be restarted (or the machine/NM will fail). In such an event, the AM has done nothing wrong, but the RM still counts it as a "failure". Since the retry count for the AM is never reset, the accumulated machine/NM failures will eventually push the AM failure count above the configured value of yarn.resourcemanager.am.max-retries. Once this happens, the RM will mark the job as failed and shut it down. This behavior is not ideal.

      I propose that we add a second configuration:

      yarn.resourcemanager.am.retry-count-window-ms

      This configuration would define a window of time for deciding that an AM is "well behaved" and that it is safe to reset its failure count back to zero. Every time an AM fails, RMAppImpl would check the last time that the AM failed. If the last failure was less than retry-count-window-ms ago, and the new failure count is greater than max-retries, then the job should fail. If the AM has never failed, the retry count is below max-retries, or the last failure was OUTSIDE the retry-count-window-ms, then the job should be restarted. Additionally, if the last failure was outside the retry-count-window-ms, the failure count should be reset to 0.
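A minimal sketch of the reset logic described above, in Java. The class and method names here are hypothetical (this is not the actual RMAppImpl code), and the failure threshold follows the proposal's wording of failing once the count exceeds max-retries:

```java
// Hypothetical sketch of the proposed window-based retry reset.
// maxRetries and retryCountWindowMs mirror the two configs discussed above.
public class AmFailureTracker {
    private final int maxRetries;
    private final long retryCountWindowMs;
    private int failureCount = 0;
    private long lastFailureTime = -1;  // -1 means the AM has never failed

    public AmFailureTracker(int maxRetries, long retryCountWindowMs) {
        this.maxRetries = maxRetries;
        this.retryCountWindowMs = retryCountWindowMs;
    }

    /**
     * Record an AM failure at time {@code now} (ms).
     * Returns true if the whole job should now fail,
     * false if the AM should simply be restarted.
     */
    public boolean onAmFailure(long now) {
        if (lastFailureTime >= 0 && now - lastFailureTime > retryCountWindowMs) {
            // Last failure was outside the window: the AM was "well behaved",
            // so reset the count before recording this failure.
            failureCount = 0;
        }
        failureCount++;
        lastFailureTime = now;
        // Per the proposal's wording: fail only once the count exceeds max-retries.
        return failureCount > maxRetries;
    }
}
```

With a window, a long-running AM that fails only due to occasional NM restarts never accumulates enough failures inside any single window to be killed, while an AM that crash-loops still fails quickly.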

      This would give developers a way to have well-behaved AMs run forever, while still failing misbehaving AMs after a short period of time.

      I think the work to be done here is to change RMAppImpl to actually look at app.attempts and see whether there have been more than max-retries failures in the last retry-count-window-ms milliseconds. If there have, the job should fail; if not, it should go forward. Additionally, we might also need to add an endTime to either RMAppAttemptImpl or RMAppFailedAttemptEvent, so that RMAppImpl can check the time of the failure.
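The attempt-scanning variant described above could be sketched as follows. Again, all names are hypothetical: failedAttemptEndTimes stands in for the per-attempt end times that would need to be recorded on RMAppAttemptImpl or RMAppFailedAttemptEvent:

```java
import java.util.List;

// Hypothetical helper for the attempt-scanning approach: instead of keeping a
// running counter, scan the recorded attempts and count failures whose end
// time falls inside the window.
public class AttemptWindowCheck {
    /**
     * Returns true if the app should fail: more than maxRetries failed
     * attempts ended within the last windowMs milliseconds.
     */
    public static boolean shouldFail(List<Long> failedAttemptEndTimes,
                                     int maxRetries, long windowMs, long now) {
        int recentFailures = 0;
        for (long endTime : failedAttemptEndTimes) {
            if (now - endTime <= windowMs) {
                recentFailures++;
            }
        }
        return recentFailures > maxRetries;
    }
}
```

Compared with the counter-reset sketch, this variant needs no mutable state on the app object beyond the attempt list itself, which is why the end time would need to be surfaced on the attempt (or its failure event).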

      Thoughts?

        Attachments

        1. YARN-611.1.patch
          54 kB
          Xuan Gong
        2. YARN-611.10.patch
          49 kB
          Xuan Gong
        3. YARN-611.11.patch
          50 kB
          Xuan Gong
        4. YARN-611.2.patch
          70 kB
          Xuan Gong
        5. YARN-611.3.patch
          71 kB
          Xuan Gong
        6. YARN-611.4.patch
          98 kB
          Xuan Gong
        7. YARN-611.4.rebase.patch
          97 kB
          Xuan Gong
        8. YARN-611.5.patch
          107 kB
          Xuan Gong
        9. YARN-611.6.patch
          32 kB
          Xuan Gong
        10. YARN-611.7.patch
          34 kB
          Xuan Gong
        11. YARN-611.8.patch
          34 kB
          Xuan Gong
        12. YARN-611.9.patch
          48 kB
          Xuan Gong
        13. YARN-611.9.rebase.patch
          49 kB
          Xuan Gong

              People

              • Assignee:
                xgong Xuan Gong
                Reporter:
                criccomini Chris Riccomini
              • Votes:
                0
                Watchers:
                21

                Dates

                • Created:
                  Updated:
                  Resolved: