Hadoop Map/Reduce / MAPREDUCE-4772

Fetch failures can take way too long for a map to be restarted


Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.23.4
    • Fix Version/s: 2.0.3-alpha, 0.23.5
    • Component/s: mrv2
    • Labels: None

Description

In one particular case we saw a NodeManager (NM) go down at just the right time: most of the reducers had already fetched the output of the map tasks, but not all of them.

The reducers that failed to fetch reported to the AM fairly quickly that they could not fetch from the NM, but because the other reducers were still running, the AM would not relaunch the map task: fewer than 50% of the running reducers had reported fetch failures against it. Then, because of the exponential back-off between fetch attempts on the reducers, it took 1 hour 45 minutes for the reduce tasks to accumulate another 10 fetch failures and report in again. By that point the other reducers had finished, so the job relaunched the map task. If those reducers had still been running at 1:45, I have no idea how long it would have taken for each of the tasks to reach 30 fetch failures.
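
To make the arithmetic concrete, here is a minimal, self-contained sketch (not the actual Hadoop fetcher code; the 4-second base delay and the doubling factor are assumed values) of how an uncapped exponential back-off stretches the wall-clock time needed to accumulate ten fetch failures:

    // Illustrative only, not the actual Hadoop fetcher code: shows how an
    // uncapped exponential back-off stretches the time needed to hit ten
    // fetch failures. The base delay and doubling factor are assumed values.
    public class UncappedBackoffDemo {
        public static void main(String[] args) {
            long delayMs = 4_000;     // assumed initial retry delay
            long elapsedMs = 0;
            for (int failure = 1; failure <= 10; failure++) {
                elapsedMs += delayMs;
                System.out.printf("failure %2d after ~%d min total%n",
                        failure, elapsedMs / 60_000);
                delayMs *= 2;         // no maximum: the wait keeps doubling
            }
            // A capped version would use Math.min(delayMs * 2, maxDelayMs),
            // bounding how long any single retry can wait.
        }
    }

With these assumed parameters the tenth retry alone waits roughly half an hour, and every further failure doubles that again, which is how the delays compound into hours.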

We need to trigger relaunching the map based on the percentage of reducers that are shuffling, not the percentage of reducers that are running. We also need a maximum limit on the back-off, so that a reducer never waits days before retrying a fetch of map output.
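
A minimal sketch of the two proposed checks, under stated assumptions: FetchFailurePolicy, shouldRelaunchMap, and nextBackoffMs are hypothetical names, and the threshold and cap are assumed values; the real fix would live in the AM's fetch-failure handling and the reducer's fetcher.

    // A sketch of the two proposed checks. FetchFailurePolicy and its members
    // are hypothetical names; the threshold and cap are assumed values.
    class FetchFailurePolicy {
        static final double FAILURE_FRACTION = 0.5; // assumed threshold
        static final long MAX_BACKOFF_MS = 60_000;  // assumed cap on retry wait

        // AM side: relaunch the map once enough of the reducers that are
        // actually shuffling (not merely running) report failures against it.
        static boolean shouldRelaunchMap(int fetchFailures, int shufflingReducers) {
            return shufflingReducers > 0
                    && fetchFailures >= FAILURE_FRACTION * shufflingReducers;
        }

        // Reducer side: exponential back-off between fetch retries, bounded
        // so a reducer never waits days before trying again.
        static long nextBackoffMs(long currentBackoffMs) {
            return Math.min(currentBackoffMs * 2, MAX_BACKOFF_MS);
        }
    }

Keying the threshold off the shuffling count means a handful of still-shuffling reducers can trigger the relaunch even while many other reducers keep running, and the cap bounds the worst-case retry interval.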

Attachments

    1. MR-4772-trunk.txt (21 kB) by Robert Joseph Evans
    2. MR-4772-0.23.txt (21 kB) by Robert Joseph Evans


People

    Assignee: Robert Joseph Evans (revans2)
    Reporter: Robert Joseph Evans (revans2)
    Votes: 0
    Watchers: 7
