Hadoop Common / HADOOP-3446

The reduce task should not flush the in-memory file system before starting the reducer


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.19.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      In the case where the entire reduce input fits in RAM, we currently force it to disk and re-read it before handing it to the reducer. It would be much better to merge from the ramfs and any spills to feed the reducer its input.
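
      A minimal sketch of the proposed data flow, in plain Java. The Segment,
      Cursor, and merge names below are invented for illustration and are not
      Hadoop's actual classes (the real ReduceTask merges serialized key/value
      segments and bounds the merge fan-in); the point is only the shape of the
      fix: the reducer consumes one sorted iterator produced by a k-way merge
      over the in-memory runs and any on-disk spills, so nothing is written out
      just to be read back.

        import java.util.Arrays;
        import java.util.Iterator;
        import java.util.List;
        import java.util.PriorityQueue;

        /**
         * Illustration only: a k-way merge that feeds a consumer a single
         * sorted stream drawn from several sorted runs ("segments"), whether
         * each run lives in RAM or in a spill file. Nothing is flushed to
         * disk merely to be read back.
         */
        public class MergeSketch {

          /** One sorted run of keys; the backing store (RAM or disk) is opaque. */
          interface Segment { Iterator<String> open(); }

          /** Heap entry: a segment's cursor, ordered by its current head key. */
          static final class Cursor implements Comparable<Cursor> {
            private final Iterator<String> it;
            private String head;
            Cursor(Iterator<String> it) { this.it = it; head = it.hasNext() ? it.next() : null; }
            boolean live() { return head != null; }
            String pop() { String k = head; head = it.hasNext() ? it.next() : null; return k; }
            public int compareTo(Cursor o) { return head.compareTo(o.head); }
          }

          /** Merge all segments into one sorted iterator for the reducer. */
          static Iterator<String> merge(List<Segment> segments) {
            PriorityQueue<Cursor> heap = new PriorityQueue<>();
            for (Segment s : segments) {
              Cursor c = new Cursor(s.open());
              if (c.live()) heap.add(c);
            }
            return new Iterator<String>() {
              public boolean hasNext() { return !heap.isEmpty(); }
              public String next() {
                Cursor c = heap.poll();            // smallest head across all segments
                String k = c.pop();
                if (c.live()) heap.add(c);         // re-enter with its next key
                return k;
              }
            };
          }

          public static void main(String[] args) {
            Segment inMem = () -> Arrays.asList("a", "d", "f").iterator(); // ramfs run
            Segment spill = () -> Arrays.asList("b", "c", "e").iterator(); // disk run
            merge(Arrays.asList(inMem, spill))
                .forEachRemaining(System.out::println);  // prints a b c d e f
          }
        }

      Running main interleaves the "ramfs" run and the "spill" run on the fly,
      yielding a through f in order without any intermediate flush to disk.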

      Attachments

        1. 3446-0.patch
          14 kB
          Christopher Douglas
        2. 3446-1.patch
          15 kB
          Christopher Douglas
        3. 3446-2.patch
          30 kB
          Christopher Douglas
        4. 3446-3.patch
          35 kB
          Christopher Douglas
        5. 3446-4.patch
          45 kB
          Christopher Douglas
        6. 3446-5.patch
          41 kB
          Christopher Douglas
        7. 3446-6.patch
          41 kB
          Christopher Douglas
        8. 3446-7.patch
          41 kB
          Christopher Douglas

People

    Assignee: cdouglas (Christopher Douglas)
    Reporter: omalley (Owen O'Malley)
    Votes: 0
    Watchers: 5

Dates

    Created:
    Updated:
    Resolved:
