[YARN-3066] Hadoop leaves orphaned tasks running after job is killed



    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component: nodemanager
    • Environment: Hadoop 2.4.1 (probably all later too), FreeBSD-10.1

    • Issue link: Bug 21156330 - Solaris should provide a setsid(1) command to run a command in a new session


      When spawning a user task, the node manager checks for the setsid(1) utility and, if present, spawns the task program via it. See hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java, for instance:

      String exec = Shell.isSetsidAvailable ? "exec setsid" : "exec";

      FreeBSD, unlike Linux, does not ship a setsid(1) utility, so a plain "exec" is used to spawn the user task. If that task spawns other external programs (a common case when the task program is a shell script) and the user kills the job via "mapred job -kill <Job>", these child processes remain running.
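      To make the failure mode concrete, here is a minimal sketch (not Hadoop's actual Shell class; the ExecPrefix class and execPrefix helper are illustrative) of the exec-prefix choice the snippet above makes. With "exec setsid" the task becomes a session leader, so a signal sent to its process group also reaches its children; with plain "exec" the children stay in the old session and survive the kill:

      ```java
      public class ExecPrefix {
          /**
           * Mirrors the ternary from DefaultContainerExecutor: when
           * setsid(1) is available, the launched task becomes a session
           * leader and kill(-pid) can terminate the whole process group;
           * without it, grandchildren are orphaned on job kill.
           */
          static String execPrefix(boolean setsidAvailable) {
              return setsidAvailable ? "exec setsid" : "exec";
          }

          public static void main(String[] args) {
              System.out.println(execPrefix(true));   // Linux-style launch
              System.out.println(execPrefix(false));  // FreeBSD fallback, root cause of this bug
          }
      }
      ```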

      1) Why is the absence of setsid(1) silently ignored, with the task spawned via plain exec? This guarantees orphaned processes whenever a job is killed prematurely.
      2) FreeBSD has a third-party replacement called ssid (which does almost the same as Linux's setsid). It would be nice to detect which binary is present at the configure stage and substitute a @SETSID@ macro into the Java file so the correct name is used.
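      The configure-time idea in point 2 could be sketched as follows. This is an assumption about how such a probe might look, not existing Hadoop code; the SessionToolProbe class and pickSessionTool helper are hypothetical names, and the candidate list (setsid on Linux, the third-party ssid port on FreeBSD) comes from the report above:

      ```java
      import java.util.List;

      public class SessionToolProbe {
          /**
           * Returns the first candidate utility that is actually
           * installed, or null if none is found. In a real build this
           * would scan PATH at configure time and emit the winner as
           * a @SETSID@ substitution.
           */
          static String pickSessionTool(List<String> candidates, List<String> installed) {
              for (String c : candidates) {
                  if (installed.contains(c)) {
                      return c;
                  }
              }
              return null;
          }
      }
      ```

      On Linux this would pick setsid; on a FreeBSD host with the ssid port installed it would pick ssid; with neither present it returns null, which is the case the report argues should be a hard error.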

      I propose making the Shell.isSetsidAvailable test stricter and refusing to start when the utility is not found: at least the problem would be visible at startup, rather than discovered by guessing why orphaned tasks keep running forever.
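      The proposed fail-fast behaviour might look like the sketch below. The FailFast class and requireSetsid method are hypothetical, not part of Hadoop's API; the point is only that the missing utility becomes a startup error instead of a silent downgrade to plain "exec":

      ```java
      public class FailFast {
          /**
           * Refuse to start when no session-leader utility is present,
           * instead of silently falling back to plain "exec" and
           * leaking child processes on job kill.
           */
          static String requireSetsid(boolean available) {
              if (!available) {
                  throw new IllegalStateException(
                      "setsid(1) (or an equivalent such as ssid) not found; "
                    + "refusing to start: killed jobs would leave orphaned tasks");
              }
              return "exec setsid";
          }

          public static void main(String[] args) {
              System.out.println(requireSetsid(true));
          }
      }
      ```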

      Assignee: Unassigned
      Reporter: Dmitry Sivachenko (trtrmitya)
      Votes: 0
      Watchers: 10