Hadoop Common / HADOOP-331

map outputs should be written to a single output file with an index



    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.3.2
    • Fix Version/s: 0.10.0
    • Component/s: None
    • Labels: None


      The current strategy of writing a file per target map consumes a lot of unused buffer space (causing out-of-memory crashes) and puts a heavy burden on the FS (many opens, inodes used, etc.).

      I propose that we instead write a single file containing all output, plus an index file identifying which byte range in that file belongs to each reduce. This removes the buffer waste, addresses scaling issues with the number of open files, and generally sets us up better for scaling. It also has advantages with very small inputs: the buffer cache will reduce the number of seeks needed, and the data-serving node can open a single file and keep it open, rather than doing directory and open operations on every request.
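      The proposed layout could be sketched roughly as follows. This is only an illustration, not the eventual file format: the class and method names are hypothetical, and the index is written naively as a pair of raw longs (start offset, length) per reduce partition.

```java
import java.io.*;
import java.nio.file.*;

public class MapOutputWriter {
    // Hypothetical sketch: write every partition's bytes back-to-back into one
    // data file, and record (startOffset, length) per partition in an index
    // file, so a reduce's byte range can be served with a single seek.
    public static long[][] writeSpill(Path dataFile, Path indexFile,
                                      byte[][] partitions) throws IOException {
        long[][] index = new long[partitions.length][2];
        try (DataOutputStream data = new DataOutputStream(
                 new BufferedOutputStream(Files.newOutputStream(dataFile)));
             DataOutputStream idx = new DataOutputStream(
                 new BufferedOutputStream(Files.newOutputStream(indexFile)))) {
            long offset = 0;
            for (int p = 0; p < partitions.length; p++) {
                data.write(partitions[p]);           // append partition p's bytes
                idx.writeLong(offset);               // where partition p starts
                idx.writeLong(partitions[p].length); // how long it is
                index[p][0] = offset;
                index[p][1] = partitions[p].length;
                offset += partitions[p].length;
            }
        }
        return index;
    }
}
```

      To serve reduce p, the data node would read the index entry, seek to `index[p][0]` in the single data file, and stream `index[p][1]` bytes.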

      The only issue I see is that in cases where the task output is substantially larger than its input, we may need to spill multiple times. In that case, we can do a merge after all spills are complete (or during the final spill).
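      The merge step above could look roughly like this sketch. The names and the index representation (`indexes[s][p] = {offset, length}` of partition p in spill s) are hypothetical, and a real merge of sorted map output would merge segments by key; this sketch simply concatenates each partition's ranges from all spills into the final file and rebuilds the index.

```java
import java.io.*;

public class SpillMerger {
    // Hypothetical sketch: concatenate partition p's byte range from every
    // spill file into finalOut, and return the final per-partition index
    // as {startOffset, length} pairs.
    public static long[][] merge(File[] spills, long[][][] indexes,
                                 File finalOut) throws IOException {
        int numPartitions = indexes[0].length;
        long[][] finalIndex = new long[numPartitions][2];
        try (OutputStream out = new BufferedOutputStream(
                 new FileOutputStream(finalOut))) {
            long offset = 0;
            byte[] buf = new byte[64 * 1024];
            for (int p = 0; p < numPartitions; p++) {
                finalIndex[p][0] = offset;
                for (int s = 0; s < spills.length; s++) {
                    // Copy partition p's range out of spill s.
                    try (RandomAccessFile in = new RandomAccessFile(spills[s], "r")) {
                        in.seek(indexes[s][p][0]);
                        long remaining = indexes[s][p][1];
                        while (remaining > 0) {
                            int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                            out.write(buf, 0, n);
                            remaining -= n;
                            offset += n;
                        }
                    }
                }
                finalIndex[p][1] = offset - finalIndex[p][0];
            }
        }
        return finalIndex;
    }
}
```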


        Attachments

        1. 331.patch (60 kB, Devaraj Das)
        2. 331.txt (4 kB, Devaraj Das)
        3. 331-design.txt (4 kB, Devaraj Das)
        4. 331-initial3.patch (67 kB, Devaraj Das)


              Assignee: Devaraj Das (ddas)
              Reporter: Eric Baldeschwieler (eric14)
              Votes: 2
              Watchers: 3