Hadoop Common / HADOOP-13837

Always get "Unable to kill" error message even though the Hadoop process was successfully killed


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: scripts
    • Labels: None
    • Target Version/s:

      Description

      Reproduce steps

      1. Set up a Hadoop cluster
      2. Stop the ResourceManager: yarn --daemon stop resourcemanager
      3. Stop a NodeManager: yarn --daemon stop nodemanager
        WARNING: nodemanager did not stop gracefully after 5 seconds: Trying to kill with kill -9
        ERROR: Unable to kill 20325

      It always prints an "Unable to kill <nm_pid>" error message, which gives the user the impression that something is wrong with the NodeManager process because it could not be forcibly killed. In fact, the kill command works as expected.

      This happens because hadoop-functions.sh does not properly check for process existence after the kill. Currently it checks process liveness immediately after issuing the kill command:

      ...
      kill -9 "${pid}" >/dev/null 2>&1
      if ps -p "${pid}" > /dev/null 2>&1; then
            hadoop_error "ERROR: Unable to kill ${pid}"
      ...
      

      When the ResourceManager is stopped before the NodeManagers, it always takes some additional time for the NodeManager process to terminate completely. I printed the output of ps -p <nm_pid> in a while loop after kill -9 and observed the following:

        PID TTY          TIME CMD
      16212 ?        00:00:11 java <defunct>
      0
        PID TTY          TIME CMD
      16212 ?        00:00:11 java <defunct>
      0
        PID TTY          TIME CMD
      16212 ?        00:00:11 java <defunct>
      0
        PID TTY          TIME CMD
      1
        PID TTY          TIME CMD
      1
        PID TTY          TIME CMD
      1
      ...
      

      In the first three iterations of the loop, the process had not fully terminated (it was still a <defunct> zombie in the process table), so the exit code of ps -p was still 0.
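      The probe can be reproduced with a small script along these lines. This is a minimal sketch of what the attached check_proc.sh likely does (the actual attachment may differ); pid_gone_after_kill is a hypothetical name and the loop bound is illustrative:

```shell
#!/usr/bin/env bash
# Minimal sketch of the probe described above; the attached
# check_proc.sh may differ. pid_gone_after_kill is a hypothetical name.
pid_gone_after_kill() {
  local pid="$1"
  kill -9 "${pid}" >/dev/null 2>&1
  # kill -9 is asynchronous: the process (or its <defunct> zombie
  # entry) can linger in the process table for a short while, during
  # which ps -p still exits 0.
  local i
  for i in 1 2 3 4 5 6 7 8 9 10; do
    ps -p "${pid}"            # prints the header, plus the entry if any
    echo "exit code: $?"
    ps -p "${pid}" >/dev/null 2>&1 || return 0   # pid finally gone
    sleep 1
  done
  return 1                    # still visible after ~10 seconds
}

sleep 100 &                   # throwaway child standing in for the NodeManager
pid_gone_after_kill "$!" && echo "terminated" || echo "still visible"
```

      Run against a freshly killed process, the first few ps calls can still exit 0 while the zombie entry exists, matching the output above.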

      Proposed fix

      At first I considered adding a more comprehensive pid check that polls for process liveness until HADOOP_STOP_TIMEOUT is reached, but that seemed to add too much complexity. The second option is simply to add a sleep 3 after kill -9; this should fix the error in most cases with relatively small changes to the script.
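      For reference, the bounded-wait variant could look roughly like the following. This is a sketch only: wait_until_dead is a hypothetical helper name, not a function in hadoop-functions.sh, and the real script structures the stop path differently:

```shell
#!/usr/bin/env bash
# Sketch of the bounded-wait option; wait_until_dead is a hypothetical
# helper, not a function in hadoop-functions.sh.
wait_until_dead() {
  local pid="$1"
  local timeout="${2:-${HADOOP_STOP_TIMEOUT:-5}}"
  local waited=0
  while ps -p "${pid}" >/dev/null 2>&1; do
    if [ "${waited}" -ge "${timeout}" ]; then
      return 1                     # still in the process table: give up
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 0                         # pid no longer visible
}

sleep 100 &                        # stand-in for the NodeManager process
pid=$!
kill -9 "${pid}" >/dev/null 2>&1
if ! wait_until_dead "${pid}"; then
  echo "ERROR: Unable to kill ${pid}" >&2   # hadoop_error in the real script
fi
```

      The simpler sleep 3 variant trades precision for simplicity: it waits unconditionally, whereas the loop above returns as soon as the pid disappears.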

        Attachments

        1. check_proc.sh
          0.2 kB
          Weiwei Yang
        2. HADOOP-13837.01.patch
          2 kB
          Weiwei Yang
        3. HADOOP-13837.02.patch
          2 kB
          Weiwei Yang
        4. HADOOP-13837.03.patch
          1 kB
          Weiwei Yang
        5. HADOOP-13837.04.patch
          1 kB
          Weiwei Yang
        6. HADOOP-13837.05.patch
          1 kB
          Weiwei Yang

              People

              • Assignee: cheersyang (Weiwei Yang)
              • Reporter: cheersyang (Weiwei Yang)
              • Votes: 0
              • Watchers: 4

                Dates

                • Created:
                  Updated:
                  Resolved: