Hadoop Map/Reduce
MAPREDUCE-4733

Reducer can fail to make progress during shuffle if too many reducers complete consecutively

Description

      TaskAttemptListenerImpl implements getMapCompletionEvents by calling Job.getTaskAttemptCompletionEvents with the same fromEvent and maxEvents passed in from the reducer, then filtering the result for just map events. We can't filter the task completion event list and expect the caller's "window" into the list to still line up. As soon as a reducer event appears in the list, we are redundantly sending map completion events the reducer has already seen.

      In the worst case, the reducer will hang if all of the events in the requested window are reducer events. Zero events are then reported back to the caller, so it never bumps up fromEvent on the next call, and the reducer never sees the final map completion events needed to complete the shuffle. This can happen when all maps complete, more than MAX_EVENTS reducers complete consecutively, but some straggling reducers get fetch failures and cause a map to be restarted.
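      The windowing bug can be illustrated with a hypothetical minimal model (this is not the actual TaskAttemptListenerImpl code; the class, method names, and event list below are invented for illustration). The buggy strategy applies the caller's (fromEvent, maxEvents) window to the mixed event list and only then filters for map events, so a window full of reducer events yields zero results and fromEvent never advances. Filtering into a map-only list first keeps the caller's window indices stable:

      ```java
      import java.util.ArrayList;
      import java.util.List;

      public class ShuffleWindowSketch {
          enum Type { MAP, REDUCE }

          // Buggy approach (as described above): window the full event
          // list first, then filter for MAP events.
          static List<Type> windowThenFilter(List<Type> events, int fromEvent, int maxEvents) {
              List<Type> out = new ArrayList<>();
              int end = Math.min(fromEvent + maxEvents, events.size());
              for (int i = fromEvent; i < end; i++) {
                  if (events.get(i) == Type.MAP) out.add(events.get(i));
              }
              return out;
          }

          // Fixed approach: filter to MAP events first so fromEvent
          // indexes a stable, map-only list.
          static List<Type> filterThenWindow(List<Type> events, int fromEvent, int maxEvents) {
              List<Type> maps = new ArrayList<>();
              for (Type t : events) if (t == Type.MAP) maps.add(t);
              int from = Math.min(fromEvent, maps.size());
              int end = Math.min(fromEvent + maxEvents, maps.size());
              return maps.subList(from, end);
          }

          public static void main(String[] args) {
              // Scenario from the description: all maps complete, then more
              // than maxEvents (here 4) reducers complete consecutively, then
              // a map is restarted after fetch failures and completes again.
              List<Type> events = List.of(Type.MAP, Type.MAP,
                      Type.REDUCE, Type.REDUCE, Type.REDUCE, Type.REDUCE,
                      Type.MAP);

              // The reducer has consumed events 0..1 and asks for the next
              // window. Buggy: window [2, 6) is all reducer events, zero map
              // events come back, fromEvent stays at 2 forever, and the
              // reducer never sees the restarted map's completion.
              System.out.println("buggy: " + windowThenFilter(events, 2, 4).size());

              // Fixed: the reducer has seen 2 map events, so index 2 of the
              // map-only list is exactly the restarted map's completion.
              System.out.println("fixed: " + filterThenWindow(events, 2, 4).size());
          }
      }
      ```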

      Attachments

        1. MAPREDUCE-4733.patch
          22 kB
          Jason Darrell Lowe
        2. MAPREDUCE-4733.patch
          21 kB
          Jason Darrell Lowe
        3. MAPREDUCE-4733.patch
          20 kB
          Jason Darrell Lowe


            People

              Assignee: Jason Darrell Lowe (jlowe)
              Reporter: Jason Darrell Lowe (jlowe)
              Votes: 0
              Watchers: 10
