Hadoop Map/Reduce: MAPREDUCE-5308

Shuffling to memory can get out-of-sync when fetching multiple compressed map outputs

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.3-alpha, 0.23.8
    • Fix Version/s: 2.1.0-beta, 0.23.9
    • Component/s: None
    • Hadoop Flags: Reviewed

      Description

      When a reducer is fetching multiple compressed map outputs from a host, the fetcher can get out-of-sync with the IFileInputStream, causing several of the maps to fail to fetch.

      This occurs because decompressors can return all the decompressed bytes before actually processing all the bytes in the compressed stream (due to checksums or other trailing data that we ignore). In the unfortunate case where these extra bytes cross an io.file.buffer.size boundary, some extra bytes will be left over and the next map_output will not fetch correctly (usually due to an invalid map_id).

      This scenario is not typically fatal to a job because the failure is charged to the map_output immediately following the "bad" one and the subsequent retry will normally work.
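      The mechanism described above can be sketched outside Hadoop. The toy Java program below is not the actual fetcher or IFileInputStream code; the class, method names, and the [id][length][payload][4-byte trailer] framing are all invented for illustration. It shows how a reader that stops after consuming the payload bytes leaves the trailing checksum bytes in the stream, so the next segment's map id is read from the wrong offset, while a reader that drains the trailer stays in sync.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/**
 * Toy simulation of the out-of-sync shuffle fetch. Each "map output" is
 * framed as [int id][int payloadLen][payload][int trailing checksum].
 * The trailer stands in for the checksum/trailing bytes a decompressor
 * may leave unconsumed after returning all decompressed data.
 */
public class ShuffleSyncDemo {

    /** Builds a stream holding two back-to-back "map outputs". */
    static byte[] buildStream() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        for (int id = 1; id <= 2; id++) {
            byte[] payload = ("map_" + id + "_data").getBytes("UTF-8");
            out.writeInt(id);               // map id
            out.writeInt(payload.length);   // payload length
            out.write(payload);             // "decompressed" bytes
            out.writeInt(0xCAFEBABE);       // trailing checksum bytes
        }
        return bos.toByteArray();
    }

    /**
     * Reads both segments' ids. When drainTrailer is false, the reader
     * stops after the payload, so the leftover trailer bytes shift every
     * subsequent read: the second "id" comes out as garbage.
     */
    static int[] readIds(boolean drainTrailer) throws IOException {
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(buildStream()));
        int[] ids = new int[2];
        for (int i = 0; i < 2; i++) {
            ids[i] = in.readInt();
            int len = in.readInt();
            in.readFully(new byte[len]);    // consume payload
            if (drainTrailer) {
                in.readInt();               // consume trailing checksum
            }
        }
        return ids;
    }

    public static void main(String[] args) throws IOException {
        int[] bad = readIds(false);
        int[] good = readIds(true);
        System.out.println("naive second id   = " + bad[1]);  // garbage, not 2
        System.out.println("drained second id = " + good[1]); // 2
    }
}
```

      In the real fetcher the analogous fix is to make sure all bytes of each compressed map output are consumed from the underlying stream before starting the next one, rather than trusting the decompressor's end-of-data signal.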

      Attachments

      1. MAPREDUCE-5308.patch (6 kB, Nathan Roberts)
      2. MAPREDUCE-5308-branch-0.23.txt (6 kB, Nathan Roberts)


      People

      • Assignee: Nathan Roberts
      • Reporter: Nathan Roberts
      • Votes: 0
      • Watchers: 10
