Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.20.205.0, 1.0.1
- Fix Version/s: None
Environment:
hadoop version: Hadoop 0.20.2-cdh3u3
uname -a: Linux xxxx 2.6.18-194.17.4.0.1.el5PAE #1 SMP Tue Oct 26 20:15:18 EDT 2010 i686 i686 i386 GNU/Linux

core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://xxxxx:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp20/</value>
  </property>
</configuration>

mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.9.60:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/var/tmp/mapred/local</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/mapred/system</value>
  </property>
</configuration>
Component/s: mapreduce, tasktracker
Description
Hello, I have been stuck on this Hadoop (CDH3u3) problem for two days and have tried every fix I could find online. The issue: when I run the bundled "wordcount" example, the TaskTracker log on one slave node shows the following errors:
1. WARN org.apache.hadoop.mapred.DefaultTaskController: Task wrapper stderr: bash: /var/tmp/mapred/local/ttprivate/taskTracker/hdfs/jobcache/job_201203131751_0003/attempt_201203131751_0003_m_000006_0/taskjvm.sh: Permission denied
2. WARN org.apache.hadoop.mapred.TaskRunner: attempt_201203131751_0003_m_000006_0 : Child Error
   java.io.IOException: Task process exit with nonzero status of 126.
3. WARN org.apache.hadoop.mapred.TaskLog: Failed to retrieve stdout log for task: attempt_201203131751_0003_m_000003_0
   java.io.FileNotFoundException: /usr/lib/hadoop-0.20/logs/userlogs/job_201203131751_0003/attempt_201203131751_0003_m_000003_0/log.index (No such file or directory)
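For context on error 2: an exit status of 126 conventionally means the shell found the file but could not execute it, which matches the "Permission denied" on taskjvm.sh in error 1. A minimal reproduction of that status code (using a temporary file, not anything from the cluster):

```shell
# Create a script, strip its execute bit, and try to run it directly.
# bash reports "Permission denied" and returns exit status 126,
# the same status the TaskTracker logged for the child JVM wrapper.
tmp=$(mktemp)
echo 'echo hi' > "$tmp"
chmod a-x "$tmp"
"$tmp" 2>/dev/null
echo "exit status: $?"   # prints: exit status: 126
```

This suggests the wrapper script itself, or a directory on its path, is not executable by the user the TaskTracker runs tasks as.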
I could not find this exact issue on Google, only a few loosely related posts, which suggested checking: A. the ulimit of the hadoop user (mine is set high enough for this bundled example); B. the memory used by the JVM (my JVM uses only -Xmx200m, far below the machine's limit); C. the permissions on mapred.local.dir and the logs directory (I set them with "chmod 777"); D. a full disk (there is plenty of free space for Hadoop in both my log directory and mapred.local.dir).
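For reference, the four checks above can be sketched as shell commands on the slave node. Paths are taken from the configuration in this report; the mount check is an extra assumption on my part, since a filesystem mounted noexec also produces "Permission denied" with exit status 126 even when file permissions look correct:

```shell
# A. effective limits for the user that launches tasks (run as that user)
ulimit -u -n -v

# B. child JVM heap setting actually in effect (path to mapred-site.xml
# is an assumption; adjust to your conf directory)
grep -A1 mapred.child.java.opts /etc/hadoop/conf/mapred-site.xml

# C. permissions along the full path to the task wrapper script --
# every parent directory must be traversable by the task user
ls -ld /var/tmp /var/tmp/mapred /var/tmp/mapred/local

# also check whether the filesystem holding mapred.local.dir is
# mounted noexec, which would forbid executing taskjvm.sh
mount | grep -w '/var/tmp\|/var'

# D. free space on mapred.local.dir and the log directory
df -h /var/tmp/mapred/local /usr/lib/hadoop-0.20/logs
```

These commands only read state; none of them change permissions or configuration.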
Thanks, everyone. I am really at my wit's end after spending days on this, and I would greatly appreciate any pointers!