Hadoop HDFS / HDFS-13397

start-dfs.sh and hdfs --daemon start datanode say "ERROR: Cannot set priority of datanode process XXXX"


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 3.0.1
    • Fix Version/s: None
    • Component/s: hdfs
    • Labels: None
    • Release Note: This fix apparently does not work in all cases; will withdraw and re-post after further investigation

    Description

      When executing

      $HADOOP_HOME/bin/hdfs --daemon start datanode
      

      as a regular user (e.g. "hdfs"), it fails with

      ERROR: Cannot set priority of datanode process XXXX
      

      where XXXX is some PID.
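
      For context (this is my reading of the 3.x start-up scripts, so treat the details as an assumption rather than a quote): the daemon wrapper in libexec/hadoop-functions.sh forks the JVM and then renices it to ${HADOOP_NICENESS}; the message above is printed when that renice call fails, roughly:

      # paraphrased sketch of the wrapper logic in hadoop-functions.sh
      renice "${HADOOP_NICENESS}" "${pid}" >/dev/null 2>&1
      if [[ $? -gt 0 ]]; then
        hadoop_error "ERROR: Cannot set priority of ${daemonname} process ${pid}"
      fi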

      It turned out that this is because, at least on Gentoo Linux (and I think this is pretty well universal), a regular user's processes cannot by default raise their own priority or the priority of any of the user's other processes. To fix this, I added these lines to /etc/security/limits.conf [NOTE: the users hdfs, yarn, and mapred are in the group called hadoop on this system]:

      @hadoop        hard    nice            -15
      @hadoop        hard    priority        -15
      

      This change will need to be made on all datanodes.
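
      After applying it on a node, a quick sanity check (assuming util-linux renice, and a fresh login so pam_limits picks up the new entries) is to try, as the hdfs user, to raise the priority of your own shell:

      # run as the hdfs user on a datanode, in a new login session
      renice -n -15 -p $$
      # Before the limits.conf change this fails with "Permission denied";
      # afterwards it should succeed, and `ulimit -e` should report a
      # correspondingly raised scheduling-priority limit.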

      The documentation needs to cover enabling [at minimum] the hdfs user to raise the priority of its processes. This is not a problem I observed under 3.0.0.

          People

            Assignee: Unassigned
            Reporter: Jeff Hubbs
            Votes: 0
            Watchers: 2
