Bigtop / BIGTOP-1129

Cannot stop datanode through init script

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.7.0
    • Fix Version/s: 0.7.0
    • Component/s: Init scripts
    • Labels: None
    • Environment: CentOS

    Description

      sudo /etc/init.d/hadoop-hdfs-datanode stop

      When starting the datanode, I do see a correct pid file in
      /var/run/hadoop-hdfs/hadoop-hdfs-datanode.pid, but whenever I call the stop command, the pid file disappears while the process still exists.
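
      This failure mode reads as the init script and the daemon machinery disagreeing about where the pid file lives. Below is a minimal sketch of that mismatch, assuming (hypothetically) that an upstream hadoop-env.sh overrides HADOOP_PID_DIR between start and stop; it is illustrative only, not the shipped script:

      # Minimal sketch of the suspected mismatch -- illustrative names,
      # NOT the actual Bigtop init script.
      INIT_PIDFILE=/var/run/hadoop-hdfs/hadoop-hdfs-datanode.pid

      stop_datanode() {
        # The stop path re-sources the environment, so an override here
        # redirects where the daemon wrapper looks for the pid.
        . /etc/hadoop/conf/hadoop-env.sh
        DAEMON_PIDFILE="${HADOOP_PID_DIR:-/var/run/hadoop-hdfs}/hadoop-hdfs-datanode.pid"
        if [ -f "$DAEMON_PIDFILE" ]; then
          kill "$(cat "$DAEMON_PIDFILE")"
        else
          echo "no datanode to stop"   # pid was recorded under a different dir
        fi
        rm -f "$INIT_PIDFILE"          # init script removes its pid file regardless
      }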

    Activity

        Bruno Mahé added a comment -

        +1. Patch looks good
        Thanks for the quick patch!

        Konstantin Boudnik added a comment -

        +1

        Roman Shaposhnik added a comment -

        Bruno Mahé thanks a bunch for the detailed instructions! It was indeed a difference between what we ship by default and what our Puppet code deploys. I'm attaching a trivial patch that disables all the tweaks that now come to us as part of hadoop-env.sh from upstream. This way the file can still be used as an example, but it no longer messes up our environment.

        Please review ASAP.
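
        (The attached patch is the authoritative change; purely as an illustration, disabling the tweaks amounts to leaving the upstream settings in /etc/hadoop/conf/hadoop-env.sh commented out, so they still document the knobs without overriding what the packages set up:)

        # hadoop-env.sh -- illustrative excerpt, not the actual patch.
        # Upstream examples stay visible but inert:
        # export JAVA_HOME=/usr/lib/jvm/java-openjdk
        # export HADOOP_PID_DIR=${HADOOP_PID_DIR}
        # export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}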

        Bruno Mahé added a comment -

        I tried again tonight on some new instances and I can still reproduce it.
        I used the standard Amazon AMI with our CentOS 6 repo.

            1  wget http://mirrors.kernel.org/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
            2  sudo rpm -Uvh epel-release-6-8.noarch.rpm 
            3  sudo vim /etc/yum.repos.d/epel.repo 
            4  sudo yum install nethogs sysstat htop tree java-1.6.0-openjdk-devel.x86_64
            5  sudo yum search hadoop
            6  sudo yum install hadoop-conf-pseudo.x86_64
            7  chkconfig --list
           12  pushd /etc/yum.repos.d/
           13  sudo wget http://bigtop01.cloudera.org:8080/view/Releases/job/Bigtop-0.7.0/label=centos6/lastSuccessfulBuild/artifact/output/bigtop.repo
           14  sudo vim bigtop.repo 
           15  yum search hadoop
           16  sudo yum install hadoop-client.x86_64 hadoop-hdfs-namenode.x86_64 hadoop-mapreduce.x86_64 hadoop-yarn.x86_64 hadoop-mapreduce-historyserver.x86_64 hadoop-yarn-resourcemanager.x86_64
           32  sudo cp namenode/* /etc/hadoop/conf/
           39  sudo umount /media/ephemeral0/
           40  sudo fdisk /dev/sdb
           41  sudo fdisk /dev/sdc
           42  sudo mkfs.ext4 /dev/sdb
           43  sudo mkfs.ext4 /dev/sdc
           44   sudo mkdir /local/data0
           45   sudo mkdir /local/data1
           46   sudo mount -t ext4 /dev/sdb /local/data0/
           47   sudo mount -t ext4 /dev/sdc /local/data1
           48  ls /var/lib/hadoop-hdfs/cache
           49   sudo mkdir -p /local/data1/hadoop/hdfs
           50   sudo chown -R hdfs /local/data1/hadoop/hdfs
           51   sudo cp -a /var/lib/hadoop-hdfs /local/data1/hadoop/hdfs/
           52   ls -al /local/data1/hadoop/hdfs/hadoop-hdfs/cache/
           53   ls -al /local/data1/hadoop/hdfs/hadoop-hdfs/
           54   ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/
           55   ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/
           56   sudo mkdir -p /local/data0/hadoop/hdfs
           57   sudo chown -R hdfs /local/data1/hadoop/hdfs
           58   sudo cp -a /var/lib/hadoop-hdfs /local/data0/hadoop/hdfs/
           59   ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/
           60   ls -al /local/data0/hadoop/hdfs
           61   ls -al /local/data1/hadoop/hdfs
           62  sudo vim /etc/hadoop/conf/core-site.xml 
           63  sudo vim /etc/hadoop/conf/hdfs-site.xml 
           64  sudo vim /etc/hadoop/conf/hadoop-env.sh 
           65  sudo /etc/init.d/hadoop-hdfs-namenode status
           66  sudo /etc/init.d/hadoop-hdfs-namenode start
           67  sudo /etc/init.d/hadoop-hdfs-namenode status
           68  less /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-31-34-231.log 
           69  ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/hdfs/dfs/name
           70  ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/
           71  sudo /etc/init.d/hadoop-hdfs-namenode init
           72  sudo /etc/init.d/hadoop-hdfs-namenode status
           73  sudo /etc/init.d/hadoop-hdfs-namenode start
           74  sudo /etc/init.d/hadoop-hdfs-namenode status
           75  ps auxww | grep nameno
           76  sudo /etc/init.d/hadoop-hdfs-namenode stop
           77  ps auxww | grep nameno
        

        core-site.xml:

        [ec2-user@ip-172-31-34-231 ~]$ cat /etc/hadoop/conf/core-site.xml 
        <?xml version="1.0"?>
        <!--
          Licensed to the Apache Software Foundation (ASF) under one or more
          contributor license agreements.  See the NOTICE file distributed with
          this work for additional information regarding copyright ownership.
          The ASF licenses this file to You under the Apache License, Version 2.0
          (the "License"); you may not use this file except in compliance with
          the License.  You may obtain a copy of the License at
        
              http://www.apache.org/licenses/LICENSE-2.0
        
          Unless required by applicable law or agreed to in writing, software
          distributed under the License is distributed on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          See the License for the specific language governing permissions and
          limitations under the License.
        -->
        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
        
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://ec2-54-200-186-192.us-west-2.compute.amazonaws.com:8020</value>
          </property>
        
          <!-- OOZIE proxy user setting -->
          <property>
            <name>hadoop.proxyuser.oozie.hosts</name>
            <value>*</value>
          </property>
          <property>
            <name>hadoop.proxyuser.oozie.groups</name>
            <value>*</value>
          </property>
        
          <!-- HTTPFS proxy user setting -->
          <property>
            <name>hadoop.proxyuser.httpfs.hosts</name>
            <value>*</value>
          </property>
          <property>
            <name>hadoop.proxyuser.httpfs.groups</name>
            <value>*</value>
          </property>
        
        </configuration>
        

        hdfs-site.xml:

        [ec2-user@ip-172-31-34-231 ~]$ cat /etc/hadoop/conf/hdfs-site.xml 
        <?xml version="1.0"?>
        <!--
          Licensed to the Apache Software Foundation (ASF) under one or more
          contributor license agreements.  See the NOTICE file distributed with
          this work for additional information regarding copyright ownership.
          The ASF licenses this file to You under the Apache License, Version 2.0
          (the "License"); you may not use this file except in compliance with
          the License.  You may obtain a copy of the License at
        
              http://www.apache.org/licenses/LICENSE-2.0
        
          Unless required by applicable law or agreed to in writing, software
          distributed under the License is distributed on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          See the License for the specific language governing permissions and
          limitations under the License.
        -->
        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
        
        <configuration>
          <property>
            <name>dfs.replication</name>
            <value>2</value>
          </property>
          <!-- Immediately exit safemode as soon as one DataNode checks in. 
               On a multi-node cluster, these configurations must be removed.  -->
          <property>
            <name>dfs.safemode.extension</name>
            <value>0</value>
          </property>
          <property>
             <name>dfs.safemode.min.datanodes</name>
             <value>1</value>
          </property>
          <property>
             <name>hadoop.tmp.dir</name>
             <value>/local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}</value>
          </property>
          <property>
             <name>dfs.namenode.name.dir</name>
             <value>file:///local/data0/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/name,file:///local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/name</value>
          </property>
          <property>
             <name>dfs.namenode.checkpoint.dir</name>
             <value>file:///local/data0/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/namesecondary,file:///local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
          </property>
          <property>
             <name>dfs.datanode.data.dir</name>
             <value>file:///local/data0/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/data,file:///local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/data</value>
          </property>
        </configuration>
        

        I did not touch other files.
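
        (A quick way to check for a pid-dir disagreement on such a box; this is a hypothetical diagnostic, not part of the original report:)

        # Where does each layer think the pid file lives?
        grep -H PID /etc/default/hadoop* /etc/hadoop/conf/hadoop-env.sh 2>/dev/null
        # Does anything under the default pid dir match a live namenode?
        ls -l /var/run/hadoop-hdfs/
        ps auxww | grep nameno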

        Roman Shaposhnik added a comment -

        Bruno, could you please provide a bit more detail on this? The thing is, I can't seem to reproduce it on my fully distributed cluster running CentOS 6 (deployed via our Puppet code). Please let us know what OS you used,
        what packages you installed, and what configuration you had in /etc/default/hadoop* and /etc/hadoop/conf.

        Konstantin Boudnik added a comment -

        Looks like a blocker to me.

        Bruno Mahé added a comment (edited) -

        Similar experience with the namenode.

        The init scripts on the YARN side work for me, so this issue seems to be limited to the HDFS side.

        Bruno Mahé added a comment (edited) -

        Tried again:

        [ec2-user@ip-172-31-39-159 ~]$ ps auxww | grep datanode
        flume     1428  0.0  0.0 100944   568 ?        S    Oct27   0:00 tail -F /var/log/hadoop-hdfs/hadoop-hdfs-datanode-ip-172-31-39-159.log
        hdfs      1919  0.1  2.0 778104 154504 ?       Sl   Oct26   2:37 /usr/lib/jvm/java-openjdk/bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Xmx128m -Xmx128m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-ip-172-31-39-159.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
        ec2-user  5850  0.0  0.0 103428   824 pts/0    R+   08:40   0:00 grep datanode
        
        [ec2-user@ip-172-31-39-159 ~]$ sudo /etc/init.d/hadoop-hdfs-datanode status
        Hadoop datanode is running                                 [  OK  ]
        
        [ec2-user@ip-172-31-39-159 ~]$ sudo /etc/init.d/hadoop-hdfs-datanode stop
        Stopping Hadoop datanode:                                  [  OK  ]
        no datanode to stop
        
        [ec2-user@ip-172-31-39-159 ~]$ sudo /etc/init.d/hadoop-hdfs-datanode status
        Hadoop datanode is not running                             [FAILED]
        
        [ec2-user@ip-172-31-39-159 ~]$ ps auxww | grep datanode
        flume     1428  0.0  0.0 100944   568 ?        S    Oct27   0:00 tail -F /var/log/hadoop-hdfs/hadoop-hdfs-datanode-ip-172-31-39-159.log
        hdfs      1919  0.1  2.0 778104 154504 ?       Sl   Oct26   2:37 /usr/lib/jvm/java-openjdk/bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Xmx128m -Xmx128m -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-ip-172-31-39-159.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
        ec2-user  5992  0.0  0.0 103428   828 pts/0    S+   08:40   0:00 grep datanode
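
        (Once the pid file is gone, the daemon wrapper has nothing to act on, hence "no datanode to stop"; a hypothetical manual cleanup for the orphaned JVM, matching the -Dproc_datanode marker visible in the ps output above:)

        # Find the surviving datanode JVM by its command line and stop it by hand.
        pgrep -u hdfs -f proc_datanode
        sudo kill "$(pgrep -u hdfs -f proc_datanode)"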
        

    People

    • Assignee: Roman Shaposhnik
    • Reporter: Bruno Mahé
    • Votes: 0
    • Watchers: 3
