Hadoop Map/Reduce
MAPREDUCE-4136

Hadoop streaming might succeed even though the reducer fails

    Details

    • Type: Bug
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.20.205.0
    • Fix Version/s: None
    • Component/s: contrib/streaming
    • Labels: None

      Description

      Hadoop streaming can succeed even though the reducer has failed. This happens when Hadoop calls PipeReducer.close() but the reducer has failed in the meantime and the process has died. When clientOut_.flush() throws an IOException in PipeMapRed.mapRedFinish(), the exception is caught but only logged. The exit status of the child process is never checked, and the task is marked as successful.

      I've attached a patch that seems to fix it for us.
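The failure mode described above can be sketched in plain Java. This is a hypothetical, simplified model of the fix (not the attached patch itself): after flushing the pipe to the child process, the parent waits for the child and checks its exit status instead of only logging the flush failure, so a dead reducer fails the task. The class and method names here are illustrative, not Hadoop's.

```java
import java.io.IOException;
import java.io.OutputStream;

public class PipeFinishSketch {

    // Mirrors the spirit of PipeMapRed.mapRedFinish(): flush the stream to
    // the child, then verify the child's exit code rather than assuming success.
    static void finishPipe(Process child, OutputStream clientOut)
            throws IOException, InterruptedException {
        try {
            clientOut.flush();
            clientOut.close();
        } catch (IOException e) {
            // Before the fix this was only logged; if the child has already
            // died, the broken pipe surfaces here and nothing else happened.
            System.err.println("flush failed: " + e.getMessage());
        }
        int exit = child.waitFor();   // wait for the child to terminate
        if (exit != 0) {              // fail the task instead of succeeding silently
            throw new IOException("subprocess exited with code " + exit);
        }
    }

    public static void main(String[] args) throws Exception {
        // "sh -c 'exit 1'" stands in for a reducer that dies mid-run.
        Process failing = new ProcessBuilder("sh", "-c", "exit 1").start();
        boolean taskFailed = false;
        try {
            finishPipe(failing, failing.getOutputStream());
        } catch (IOException e) {
            taskFailed = true;
        }
        System.out.println(taskFailed ? "task marked failed" : "task marked successful");
    }
}
```

With the exit-status check in place, the example reports the task as failed; without it, the flush exception would be swallowed and the task would appear successful.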

    Activity

    No work has yet been logged on this issue.

    People

    • Assignee: Unassigned
    • Reporter: Wouter de Bie
    • Votes: 0
    • Watchers: 4
