Hadoop Common / HADOOP-657

Free temporary space should be modelled better


    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.17.0
    • Fix Version/s: 0.19.0
    • Component/s: None


      Currently, there is a configurable amount of disk space that must be free before a task tracker will accept a new task. However, that isn't a very good model of how much space a task is actually likely to consume. I'd like to propose:

      Map tasks: totalInputSize * conf.getFloat("map.output.growth.factor", 1.0) / numMaps
      Reduce tasks: totalInputSize * 2 * conf.getFloat("map.output.growth.factor", 1.0) / numReduces

      where totalInputSize is the size of all the map inputs for the given job.
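
      As a rough sketch in Java (the class and method names here are
      illustrative, not from any attached patch; only
      map.output.growth.factor comes from the proposal), the two
      estimates could look like:

      // Hypothetical sketch; growthFactor would be read as
      // conf.getFloat("map.output.growth.factor", 1.0f).
      class TaskSpaceEstimate {
        /** Expected scratch space for one map task of the job. */
        static long forMapTask(long totalInputSize, int numMaps, float growthFactor) {
          return (long) (totalInputSize * growthFactor / numMaps);
        }
        /** Expected scratch space for one reduce task: twice the map-side figure. */
        static long forReduceTask(long totalInputSize, int numReduces, float growthFactor) {
          return (long) (totalInputSize * 2 * growthFactor / numReduces);
        }
      }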

      To start a new task, we require:
      newTaskAllocation + (sum over running tasks of (1.0 - done) * allocation) <=
      free disk * conf.getFloat("mapred.max.scratch.allocation", 0.90);

      So in English, we will model the expected size of each task and only start tasks that should leave us a 10% margin. With:
      map.output.growth.factor – the size of the transient data relative to the map inputs
      mapred.max.scratch.allocation – the maximum fraction of free disk we are willing to allocate to tasks.
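
      A minimal sketch of that admission check under the same illustrative
      naming (the RunningTask bookkeeping class is assumed, not part of the
      proposal):

      // Hypothetical sketch; maxScratchFraction would be read as
      // conf.getFloat("mapred.max.scratch.allocation", 0.90f).
      class RunningTask {
        double done;      // fraction of the task completed, 0.0 to 1.0
        long allocation;  // estimated scratch space for the task, in bytes
      }

      class ScratchSpaceModel {
        static boolean canAcceptTask(long newTaskAllocation,
                                     java.util.List<RunningTask> running,
                                     long freeDisk, float maxScratchFraction) {
          long committed = 0;
          for (RunningTask t : running) {
            // Unfinished share of each running task's estimated allocation.
            committed += (long) ((1.0 - t.done) * t.allocation);
          }
          // Accept only if the new task plus the unfinished share of running
          // tasks fits within the allowed fraction of free disk, leaving the
          // 10% margin by default.
          return newTaskAllocation + committed <= freeDisk * maxScratchFraction;
        }
      }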

        Attachments

      1. spaceest_717.patch (17 kB, Ari Rabkin)
      2. clean_spaceest.patch (17 kB, Ari Rabkin)
      3. diskspaceest_v4.patch (18 kB, Ari Rabkin)
      4. diskspaceest_v3.patch (18 kB, Ari Rabkin)
      5. diskspaceest_v2.patch (18 kB, Ari Rabkin)
      6. diskspaceest.patch (18 kB, Ari Rabkin)



            • Assignee: Ari Rabkin
            • Reporter: Owen O'Malley
            • Votes: 0
            • Watchers: 1


              • Created: