Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
Description
When running a MapReduce job with a large jar on the classpath (e.g., jruby from HBase), the TaskTracker consumes a large amount of CPU time during task startup. Using jstack, I got the following stack trace:
"Thread-8941" daemon prio=10 tid=0x00002aab08005c00 nid=0x2807 waiting on condition [0x0000000043eca000..0x0000000043ecbc90]
java.lang.Thread.State: RUNNABLE
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:232)
at java.lang.StringCoding.encode(StringCoding.java:272)
at java.lang.String.getBytes(String.java:947)
at java.io.UnixFileSystem.getLength(Native Method)
at java.io.File.length(File.java:848)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:428)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
at org.apache.hadoop.filecache.DistributedCache.getLocalCache(DistributedCache.java:210)
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:178)
Running the system "du" command on the same directory, by contrast, returns very quickly.
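The stack trace shows FileUtil.getDU recursing through the unpacked cache directory and calling File.length() once per entry; each call crosses into native code (UnixFileSystem.getLength), which is where the CPU time goes when a large jar unpacks into thousands of files. A minimal sketch of that recursive pattern, assuming the class name DuSketch and the simple sum-of-lengths semantics (not Hadoop's exact implementation):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class DuSketch {
    // Hypothetical re-creation of the FileUtil.getDU approach seen in the
    // stack trace: recurse into every subdirectory and sum File.length()
    // over all entries. Every File.length() call is a separate native
    // stat, so a tree with many small files is far slower than one
    // invocation of the system "du" command.
    public static long getDU(File dir) {
        if (!dir.isDirectory()) {
            return dir.length(); // 0 if the path does not exist
        }
        long size = 0;
        File[] entries = dir.listFiles();
        if (entries != null) {
            for (File entry : entries) {
                size += getDU(entry); // one native stat per file
            }
        }
        return size;
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny tree and measure it.
        File root = File.createTempFile("du", "");
        root.delete();
        root.mkdir();
        File sub = new File(root, "sub");
        sub.mkdir();
        try (FileWriter w = new FileWriter(new File(sub, "a.txt"))) {
            w.write("hello"); // 5 bytes
        }
        System.out.println(getDU(root)); // prints 5
    }
}
```

With one stat per file, the cost of this walk scales with the number of entries under the cache directory, which matches the behavior reported above for a large unpacked jar.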
Attachments
Issue Links
- duplicates HADOOP-4780 Task Tracker burns a lot of cpu in calling getLocalCache (Closed)