Hadoop Map/Reduce
MAPREDUCE-1783

Task Initialization should be delayed till when a job can be run


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.20.1
    • Fix Version/s: 0.22.0, 0.23.0
    • Component/s: contrib/fair-share
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      The FairScheduler task scheduler uses PoolManager to limit the number of jobs that can be running at a given time. However, submitted jobs are initialized immediately by EagerTaskInitializationListener, which calls JobInProgress.initTasks and thereby reads the job's split file into memory. The split information is not needed until the number of running jobs falls below the configured maximum, so when the split data is large this causes unnecessary memory pressure on the JobTracker.
      To ease this pressure, FairScheduler can use an alternative implementation of JobInProgressListener that is aware of PoolManager limits and delays task initialization until the number of running jobs is below the maximum.
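      The idea described above can be sketched as a listener that queues newly submitted jobs and only initializes them while the pool is under its running-job limit. This is an illustrative sketch, not the attached patch: `DelayedInitListener`, `jobAdded`, `jobCompleted`, and the `String` stand-in for a job are all simplified assumptions; in the real scheduler the init step would be `JobInProgress.initTasks()` gated by the pool's PoolManager limit.

      ```java
      import java.util.ArrayDeque;
      import java.util.Queue;

      // Hypothetical stand-in for a pool-aware JobInProgressListener.
      // A job's "initialization" (reading its split file) is deferred
      // until the running-job count is below the pool's maximum.
      class DelayedInitListener {
          private final int maxRunningJobs;                      // limit the PoolManager would impose
          private final Queue<String> pending = new ArrayDeque<>(); // submitted but not yet initialized
          private int running = 0;                               // jobs whose tasks are initialized

          DelayedInitListener(int maxRunningJobs) {
              this.maxRunningJobs = maxRunningJobs;
          }

          // Invoked when a job is submitted: queue it, then initialize
          // as many queued jobs as the limit allows.
          void jobAdded(String job) {
              pending.add(job);
              initUpToLimit();
          }

          // Invoked when a running job finishes, freeing a slot.
          void jobCompleted() {
              running--;
              initUpToLimit();
          }

          private void initUpToLimit() {
              while (running < maxRunningJobs && !pending.isEmpty()) {
                  String job = pending.remove();
                  // In the real scheduler, JobInProgress.initTasks() would
                  // run here, reading the split file into memory.
                  running++;
              }
          }

          int runningCount() { return running; }
          int pendingCount() { return pending.size(); }
      }
      ```

      With a limit of 2, a third submitted job stays queued (its splits unread) until one of the first two completes, which is exactly the memory-pressure behavior the issue asks for.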

      Attachments

        1. 0001-Pool-aware-job-initialization.patch
          30 kB
          Ramkumar Vadali
        2. 0001-Pool-aware-job-initialization.patch.1
          30 kB
          Ramkumar Vadali
        3. submit-mapreduce-1783.patch
          29 kB
          Ramkumar Vadali
        4. MAPREDUCE-1783.patch
          13 kB
          Ramkumar Vadali

        Activity

          People

            Assignee: rvadali Ramkumar Vadali
            Reporter: rvadali Ramkumar Vadali
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved: