Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 0.17.0
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
Description
Currently, there is a configurable amount of disk space that must be free before a task tracker will accept a new task. However, a fixed threshold isn't a very good model of how much space a task is actually likely to need. I'd like to propose the following per-task estimates:
Map tasks: totalInputSize * conf.getFloat("map.output.growth.factor", 1.0) / numMaps
Reduce tasks: totalInputSize * 2 * conf.getFloat("map.output.growth.factor", 1.0) / numReduces
where totalInputSize is the size of all the map inputs for the given job.
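As a rough sketch, the estimates could be computed as below. The TaskScratchEstimator class and method names are hypothetical, for illustration only; the configuration key and the formulas are the ones proposed above.
{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper, for illustration only: the class and method names
// are invented here; the config key and formulas are from the proposal.
public class TaskScratchEstimator {
  private final Configuration conf;

  public TaskScratchEstimator(Configuration conf) {
    this.conf = conf;
  }

  /** Expected scratch space for one map task of a job. */
  public long mapAllocation(long totalInputSize, int numMaps) {
    double growth = conf.getFloat("map.output.growth.factor", 1.0f);
    return (long) (totalInputSize * growth / numMaps);
  }

  /** Expected scratch space for one reduce task: twice the map estimate. */
  public long reduceAllocation(long totalInputSize, int numReduces) {
    double growth = conf.getFloat("map.output.growth.factor", 1.0f);
    return (long) (totalInputSize * 2 * growth / numReduces);
  }
}
{code}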
To start a new task:
newTaskAllocation + (sum over running tasks of (1.0 - done) * allocation) <=
free disk * conf.getFloat("mapred.max.scratch.allocation", 0.90);
So in English, we will model the expected scratch size of each task and only start tasks that should leave us a 10% margin. With:
map.output.growth.factor – the expected size of a task's transient data relative to its map input
mapred.max.scratch.allocation – the maximum fraction of free disk we are willing to allocate to task scratch space
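A minimal sketch of that admission check follows. ScratchAdmissionCheck, RunningTask, and the accessor names are hypothetical; only the config key, the 0.90 default, and the inequality come from this proposal.
{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical admission check, for illustration only: RunningTask and the
// accessor names are invented; the config key, the 0.90 default, and the
// inequality are taken from the proposal above.
public class ScratchAdmissionCheck {
  /** Minimal view of a running task: its scratch allocation and progress. */
  public interface RunningTask {
    long allocation();   // expected scratch bytes, from the estimates above
    double progress();   // fraction done, in [0.0, 1.0]
  }

  private final Configuration conf;

  public ScratchAdmissionCheck(Configuration conf) {
    this.conf = conf;
  }

  /** True if starting a task of this size keeps the required margin. */
  public boolean canAccept(long newTaskAllocation,
                           Iterable<RunningTask> running,
                           long freeDisk) {
    double committed = 0.0;
    for (RunningTask t : running) {
      // Only the still-unfinished share of each allocation counts.
      committed += (1.0 - t.progress()) * t.allocation();
    }
    float maxFraction = conf.getFloat("mapred.max.scratch.allocation", 0.90f);
    // Leave (1 - maxFraction) of the free disk, e.g. 10%, as headroom.
    return newTaskAllocation + committed <= freeDisk * maxFraction;
  }
}
{code}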
Attachments
Issue Links
- incorporates: HADOOP-3441 Pass the size of the MapReduce input to JobInProgress (Closed)