Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Won't Fix
Description
When I ran the gridmix 2 benchmark load on a fresh cluster of 500 nodes running Hadoop trunk,
the gridmix load, consisting of 202 map/reduce jobs of various sizes, completed in 32 minutes.
Then I ran the same set of jobs on the same cluster; they completed in 43 minutes.
When I ran them a third time, it took (almost) forever: the job tracker became non-responsive.
The job tracker's heap size was set to 2GB.
The cluster is configured to keep up to 500 jobs in memory.
The job tracker kept one CPU busy all the time. It looked like this was due to GC.
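(A minimal way to verify the GC hypothesis, assuming the JobTracker's JVM options are passed via HADOOP_JOBTRACKER_OPTS in conf/hadoop-env.sh, as in stock Hadoop of this era; the flags and log path below are illustrative, not taken from this report:)

    # conf/hadoop-env.sh: enable GC logging on the JobTracker JVM (illustrative diagnostic)
    export HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/jobtracker-gc.log"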
I believe releases 0.18 and 0.19 exhibit similar behavior.
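(For reference, a sketch of how the two settings above are typically configured in this era of Hadoop. The heap is set via HADOOP_HEAPSIZE; the assumption that mapred.jobtracker.completeuserjobs.maximum is the knob behind "keep up to 500 jobs in memory" is mine, not stated in the report:)

    # conf/hadoop-env.sh: daemon heap size in MB (2000 MB is roughly the 2GB cited above)
    export HADOOP_HEAPSIZE=2000

    <!-- conf/hadoop-site.xml: completed jobs retained in JobTracker memory, per user -->
    <property>
      <name>mapred.jobtracker.completeuserjobs.maximum</name>
      <value>500</value>
    </property>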
Attachments
Issue Links
- depends upon
  - HADOOP-4933 ConcurrentModificationException in JobHistory.java (Closed)
  - HADOOP-4966 Setup tasks are not removed from JobTracker's taskIdToTIPMap even after the job completes (Closed)
  - MAPREDUCE-488 JobTracker webui should report heap memory used (Resolved)
  - HADOOP-4934 Distinguish running/successful/failed/killed jobs in jobtracker's history (Closed)
- is part of
  - MAPREDUCE-331 Make jobtracker resilient to memory issues (Resolved)
- is related to
  - MAPREDUCE-291 Optionally a separate daemon should serve JobHistory (Resolved)