Hadoop Map/Reduce: MAPREDUCE-5716

Shell$ExitCodeException when running a MapReduce job on the yarn framework


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Invalid
    • Affects Version/s: 2.2.0
    • Fix Version/s: None
    • Component/s: applicationmaster
    • Labels: None
    • Environment: Hadoop 2.2.0, Linux Red Hat 5.5

    Description

      I use Hadoop 2.2.0 to run a map-reduce job. At first I set the property "mapreduce.framework.name" to "local" in mapred-site.xml, and everything went fine with no exceptions. But when I run the job on the server cluster with "mapreduce.framework.name" set to "yarn", it shows the exception below:

      2014-01-10 14:51:03,131 INFO ContainersLauncher #0 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: launchContainer: [/home/hadoop/hadoop-2.2.0/bin/container-executor, hadoop, 1, application_1389336249740_0001, container_1389336249740_0001_01_000013, /new/hadoop/data/tmp/nm-local-dir/usercache/hadoop/appcache/application_1389336249740_0001/container_1389336249740_0001_01_000013, /new/hadoop/data/tmp/nm-local-dir/nmPrivate/application_1389336249740_0001/container_1389336249740_0001_01_000013/launch_container.sh, /new/hadoop/data/tmp/nm-local-dir/nmPrivate/application_1389336249740_0001/container_1389336249740_0001_01_000013/container_1389336249740_0001_01_000013.tokens, /new/hadoop/data/tmp/nm-local-dir/nmPrivate/container_1389336249740_0001_01_000013.pid, /new/hadoop/data/tmp/nm-local-dir, /home/hadoop/hadoop-2.2.0/logs/userlogs, cgroups=none]
      2014-01-10 14:51:03,134 INFO ContainersLauncher #2 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: launchContainer: [/home/hadoop/hadoop-2.2.0/bin/container-executor, hadoop, 1, application_1389336249740_0001, container_1389336249740_0001_01_000014, /new/hadoop/data/tmp/nm-local-dir/usercache/hadoop/appcache/application_1389336249740_0001/container_1389336249740_0001_01_000014, /new/hadoop/data/tmp/nm-local-dir/nmPrivate/application_1389336249740_0001/container_1389336249740_0001_01_000014/launch_container.sh, /new/hadoop/data/tmp/nm-local-dir/nmPrivate/application_1389336249740_0001/container_1389336249740_0001_01_000014/container_1389336249740_0001_01_000014.tokens, /new/hadoop/data/tmp/nm-local-dir/nmPrivate/container_1389336249740_0001_01_000014.pid, /new/hadoop/data/tmp/nm-local-dir, /home/hadoop/hadoop-2.2.0/logs/userlogs, cgroups=none]
      2014-01-10 14:51:03,142 INFO AsyncDispatcher event handler org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1389336249740_0001_01_000013 transitioned from LOCALIZED to RUNNING
      2014-01-10 14:51:03,144 INFO AsyncDispatcher event handler org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1389336249740_0001_01_000014 transitioned from LOCALIZED to RUNNING
      2014-01-10 14:51:03,144 WARN ContainersLauncher #4 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exception from container-launch with container ID: container_1389336249740_0001_01_000016 and exit code: 127
      org.apache.hadoop.util.Shell$ExitCodeException:
      at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
      at org.apache.hadoop.util.Shell.run(Shell.java:379)
      at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
      at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:252)
      at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
      at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
      at java.util.concurrent.FutureTask.run(FutureTask.java:166)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
      at java.lang.Thread.run(Thread.java:636)
      2014-01-10 14:51:03,161 WARN ContainersLauncher #2 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code from container container_1389336249740_0001_01_000014 is : 127
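
      Exit code 127 from a launched shell script generally means "command not found"; for launch_container.sh a common cause is JAVA_HOME not being visible to the container environment on the worker nodes. A minimal sketch of passing it through explicitly in mapred-site.xml (the property names are standard Hadoop 2.x keys; the /usr/java/default path is only an assumed example):

      <property>
        <name>yarn.app.mapreduce.am.env</name>
        <!-- assumed JDK location; substitute the real path on your nodes -->
        <value>JAVA_HOME=/usr/java/default</value>
      </property>
      <property>
        <name>mapreduce.map.env</name>
        <value>JAVA_HOME=/usr/java/default</value>
      </property>
      <property>
        <name>mapreduce.reduce.env</name>
        <value>JAVA_HOME=/usr/java/default</value>
      </property>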

      My mapred-site.xml content:
      <configuration>
        <property>
          <name>mapreduce.jobtracker.address</name>
          <value>local</value>
          <description>The host and port that the MapReduce job tracker runs
            at. If "local", then jobs are run in-process as a single map
            and reduce task.
          </description>
        </property>

        <property>
          <name>mapreduce.jobtracker.http.address</name>
          <value>121server:50030</value>
          <description>
            The job tracker http server address and port the server will listen on.
            If the port is 0 then the server will start on a free port.
          </description>
        </property>

        <property>
          <name>mapreduce.job.maps</name>
          <value>40</value>
          <description>The default number of map tasks per job.
            Ignored when mapreduce.jobtracker.address is "local".
          </description>
        </property>

        <property>
          <name>mapreduce.framework.name</name>
          <value>classic</value>
        </property>
      </configuration>
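
      Note that as pasted this file sets "mapreduce.framework.name" to "classic", not "yarn" as described above; jobs are only submitted through the YARN ResourceManager when the value is "yarn":

      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>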

      My yarn-site.xml content:

      <configuration>
        <!-- Site specific YARN configuration properties -->
        <property>
          <name>yarn.resourcemanager.resource-tracker.address</name>
          <value>152server:8990</value>
          <description>host is the hostname of the resource manager and
            port is the port on which the NodeManagers contact the Resource Manager.
          </description>
        </property>
        <property>
          <name>yarn.resourcemanager.scheduler.address</name>
          <value>152server:8991</value>
          <description>host is the hostname of the resourcemanager and port is the port
            on which the Applications in the cluster talk to the Resource Manager.
          </description>
        </property>
        <property>
          <name>yarn.resourcemanager.scheduler.class</name>
          <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
          <description>In case you do not want to use the default scheduler</description>
        </property>
        <property>
          <name>yarn.resourcemanager.address</name>
          <value>152server:8993</value>
          <description>the host is the hostname of the ResourceManager and the port is the port on
            which the clients can talk to the Resource Manager.</description>
        </property>
        <property>
          <description>The address of the RM web application.</description>
          <name>yarn.resourcemanager.webapp.address</name>
          <value>152server:18088</value>
        </property>
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
        </property>
        <property>
          <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
          <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
          <name>yarn.nodemanager.resource.memory-mb</name>
          <value>5120</value>
        </property>
      </configuration>
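
      The launch log above shows LinuxContainerExecutor in use, which must be enabled explicitly on every NodeManager. A minimal yarn-site.xml sketch, assuming the "hadoop" group to match the user in the launch command (both property names are standard YARN keys):

      <property>
        <name>yarn.nodemanager.container-executor.class</name>
        <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
      </property>
      <property>
        <!-- "hadoop" is an assumption; the group must match container-executor.cfg,
             and the container-executor binary must be setuid root -->
        <name>yarn.nodemanager.linux-container-executor.group</name>
        <value>hadoop</value>
      </property>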

People

    Assignee: dimmacro
    Reporter: dimmacro
    Votes: 0
    Watchers: 2
