Hadoop Common
HADOOP-5652

Reduce does not respect in-memory segment memory limit when number of on disk segments == io.sort.factor

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.20.0
    • Fix Version/s: 0.21.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

    If the number of on-disk segments is exactly io.sort.factor, map output segments may be left in memory for the reduce, contrary to the limit specified by mapred.job.reduce.input.buffer.percent.
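
    The boundary condition is easy to illustrate with a small sketch. The class and member names below (FinalMergePlanner, maxInMemBytes, the two spill methods) are hypothetical, not the actual reduce-side merge code or the attached patch; the sketch only shows how a spill check keyed to io.sort.factor alone misbehaves when the on-disk segment count lands exactly on the factor.

    // Minimal sketch with hypothetical names; an illustration of the
    // boundary condition, not the ReduceTask code or the attached patch.
    public class FinalMergePlanner {

      private final int ioSortFactor;    // io.sort.factor
      private final long maxInMemBytes;  // limit derived from
                                         // mapred.job.reduce.input.buffer.percent

      public FinalMergePlanner(int ioSortFactor, long maxInMemBytes) {
        this.ioSortFactor = ioSortFactor;
        this.maxInMemBytes = maxInMemBytes;
      }

      // Buggy variant: in-memory map outputs are flushed to disk only when
      // they can join a single-pass merge with the on-disk segments, i.e.
      // when onDiskSegments < ioSortFactor. When onDiskSegments is exactly
      // ioSortFactor the test is false and the in-memory segments are kept,
      // no matter how many bytes they hold.
      public boolean spillBeforeFinalMergeBuggy(int onDiskSegments, long inMemBytes) {
        return inMemBytes > 0 && onDiskSegments < ioSortFactor;
      }

      // Intended behaviour: the retained bytes must also respect the limit
      // set via mapred.job.reduce.input.buffer.percent, so the equality case
      // still spills when the in-memory data exceeds maxInMemBytes.
      public boolean spillBeforeFinalMergeFixed(int onDiskSegments, long inMemBytes) {
        return inMemBytes > maxInMemBytes
            || (inMemBytes > 0 && onDiskSegments < ioSortFactor);
      }
    }

    For example, with io.sort.factor = 10, ten on-disk segments, and an in-memory buffer larger than the configured limit, the buggy check returns false while the fixed one spills, which is the equality case described above.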

    Attachments

    1. 5652-1.patch (0.8 kB, Chris Douglas)
    2. 5652-0.patch (0.8 kB, Chris Douglas)

    People

    • Assignee: Chris Douglas (chris.douglas)
    • Reporter: Chris Douglas (chris.douglas)
    • Votes: 0
    • Watchers: 2
