Details
Type: Improvement
Status: Closed
Priority: Major
Resolution: Fixed
Affects Version/s: 0.3.2
Component/s: None
Labels: None
Description
The current strategy of writing a file per target reduce is consuming a lot of unused buffer space (causing out-of-memory crashes) and puts a heavy burden on the FS (many opens, inodes used, etc.).
I propose that each map write a single file containing all of its output, along with an index file identifying which byte range in the file goes to each reduce. This removes the buffer waste, addresses the scaling issues with the number of open files, and generally sets us up better for scaling. It also has advantages with very small inputs, since the buffer cache will reduce the number of seeks needed, and the data-serving node can open a single file and keep it open rather than doing directory and open ops on every request.
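A minimal sketch of the proposed layout, using hypothetical names (SingleFileMapOutput, writeSpill) rather than actual Hadoop classes: each partition's serialized output is appended to one data file, and a companion index file records an (offset, length) pair for every reduce, so a fetcher can seek directly to its byte range while the serving node keeps a single file handle open.

    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.List;

    // Hypothetical sketch: write all partitions of one map task's output into a
    // single data file, plus an index file recording each reduce's byte range.
    public class SingleFileMapOutput {

        /**
         * @param partitions serialized output for each reduce, indexed by partition id
         */
        public static void writeSpill(String dataPath, String indexPath,
                                      List<byte[]> partitions) throws IOException {
            try (FileOutputStream data = new FileOutputStream(dataPath);
                 DataOutputStream index = new DataOutputStream(new FileOutputStream(indexPath))) {
                long offset = 0;
                for (byte[] partition : partitions) {
                    data.write(partition);              // append this reduce's bytes to the single data file
                    index.writeLong(offset);            // start of this reduce's byte range
                    index.writeLong(partition.length);  // length of this reduce's byte range
                    offset += partition.length;
                }
            }
        }
    }

On the serving side, a request for reduce r would read the r-th index entry and serve exactly that byte range from the already-open data file.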
The only issue I see is that in cases where the task output is substantially larger than its input, we may need to spill multiple times. In this case, we can do a merge after all spills are complete (or during the final spill).
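Under that assumption, the final merge could look roughly like the sketch below. The names (SpillMerger, Range, merge) are illustrative only, and for simplicity the sketch just concatenates each reduce's byte ranges from every spill into the final file rather than doing a true sort-merge of sorted runs.

    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.List;

    // Hypothetical sketch: combine several spill files into one final data file.
    // For each reduce partition, the bytes from every spill are laid down
    // contiguously, and a new index records the merged byte range.
    public class SpillMerger {

        /** One spill's index entry for a single reduce partition. */
        public static class Range {
            public final long offset, length;
            public Range(long offset, long length) { this.offset = offset; this.length = length; }
        }

        /**
         * @param spills       paths of the individual spill data files
         * @param spillIndexes per-spill index: for each spill, the Range of every partition
         */
        public static void merge(List<String> spills, List<List<Range>> spillIndexes,
                                 int numPartitions, String finalData, String finalIndex)
                throws IOException {
            try (FileOutputStream out = new FileOutputStream(finalData);
                 DataOutputStream index = new DataOutputStream(new FileOutputStream(finalIndex))) {
                long outOffset = 0;
                for (int p = 0; p < numPartitions; p++) {
                    long partitionStart = outOffset;
                    for (int s = 0; s < spills.size(); s++) {
                        Range r = spillIndexes.get(s).get(p);
                        try (RandomAccessFile in = new RandomAccessFile(spills.get(s), "r")) {
                            in.seek(r.offset);
                            byte[] buf = new byte[(int) r.length]; // assumes a range fits in memory, for the sketch
                            in.readFully(buf);
                            out.write(buf);                        // simple concatenation, not a sort-merge
                            outOffset += r.length;
                        }
                    }
                    index.writeLong(partitionStart);               // merged byte range for partition p
                    index.writeLong(outOffset - partitionStart);
                }
            }
        }
    }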
Attachments
Issue Links
- incorporates: HADOOP-717 When there are few reducers, sorting should be done by mappers (Closed)
- is cloned by: HADOOP-570 Map tasks may fail due to out of memory, if the number of reducers are moderately big (Closed)