Hadoop Map/Reduce
MAPREDUCE-3343

TaskTracker Out of Memory because of distributed cache

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.20.205.0
    • Fix Version/s: 1.0.1
    • Component/s: mrv1

      Description

      This Out of Memory error happens when you run a large number of jobs (using the distributed cache) on a TaskTracker.

      The basic issue seems to be with the distributedCacheManager (an instance of TrackerDistributedCacheManager in TaskTracker.java). It is created during TaskTracker.initialize() and keeps a reference to a TaskDistributedCacheManager for every submitted job via the jobArchives map, as well as references to CacheStatus objects via the cachedArchives map. I am not seeing these cleaned up between jobs, so this can cause out-of-memory problems after a really large number of jobs are submitted. We have seen this issue in a number of cases.
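      The retention pattern described above can be illustrated with a minimal sketch. The class and method names below are purely illustrative, not the actual Hadoop API: the point is that per-job entries accumulate in long-lived maps unless a completion hook removes them.

      ```java
      import java.util.HashMap;
      import java.util.Map;

      // Hypothetical sketch of the leak: a long-lived manager (like
      // TrackerDistributedCacheManager, which lives as long as the TaskTracker)
      // holding per-job state in maps that are never pruned.
      class CacheManagerSketch {
          // Analogous to jobArchives: one entry per submitted job.
          private final Map<String, Object> jobArchives = new HashMap<>();
          // Analogous to cachedArchives: one status entry per localized archive.
          private final Map<String, Object> cachedArchives = new HashMap<>();

          void jobSubmitted(String jobId, String archive) {
              jobArchives.put(jobId, new Object());      // per-job manager retained
              cachedArchives.put(archive, new Object()); // cache status retained
          }

          // The kind of cleanup that is missing: release per-job state
          // once the job finishes, so the maps do not grow without bound.
          void jobCompleted(String jobId, String archive) {
              jobArchives.remove(jobId);
              cachedArchives.remove(archive);
          }

          int retained() {
              return jobArchives.size() + cachedArchives.size();
          }
      }
      ```

      Without the jobCompleted() step, every submitted job leaves two live entries behind, and heap usage grows linearly with the number of jobs the TaskTracker has ever run.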

          People

          • Assignee: zhaoyunjiong
          • Reporter: Ahmed Radwan
          • Votes: 0
          • Watchers: 7
