Hadoop HDFS / HDFS-12109

"fs" java.net.UnknownHostException when HA NameNode is used

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Bug
    • Affects Version/s: 2.8.0
    • Fix Version/s: None
    • Component/s: fs
    • Labels: None

      Description

      After setting up an HA NameNode configuration, the following invocation of "fs" fails:

      [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
      -ls: java.net.UnknownHostException: saccluster

      It works if the same properties are passed explicitly on the command line, as shown below:

      /usr/local/hadoop/bin/hdfs dfs \
        -Ddfs.nameservices=saccluster \
        -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
        -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 \
        -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 \
        -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 \
        -ls /
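
      The same client-side settings can also be supplied programmatically before obtaining a FileSystem handle. The sketch below is illustrative only (the class name is made up for this report; the property values mirror the -D flags above) and assumes the Hadoop client libraries are on the classpath:

      import java.net.URI;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class HaListRoot {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              // Same HA client settings that the -D flags pass on the command line.
              conf.set("fs.defaultFS", "hdfs://saccluster");
              conf.set("dfs.nameservices", "saccluster");
              conf.set("dfs.ha.namenodes.saccluster", "namenode01,namenode02");
              conf.set("dfs.namenode.rpc-address.saccluster.namenode01", "namenode01:8020");
              conf.set("dfs.namenode.rpc-address.saccluster.namenode02", "namenode02:8020");
              conf.set("dfs.client.failover.proxy.provider.saccluster",
                      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

              // With the logical nameservice fully described, "saccluster" is resolved
              // through the failover proxy provider rather than DNS.
              try (FileSystem fs = FileSystem.get(URI.create("hdfs://saccluster"), conf)) {
                  for (FileStatus status : fs.listStatus(new Path("/"))) {
                      System.out.println(status.getPath());
                  }
              }
          }
      }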

      These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as shown below (a small sanity-check sketch follows the listing):

      <property>
        <name>dfs.nameservices</name>
        <value>saccluster</value>
      </property>
      <property>
        <name>dfs.ha.namenodes.saccluster</name>
        <value>namenode01,namenode02</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
        <value>namenode01:8020</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
        <value>namenode02:8020</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.saccluster.namenode01</name>
        <value>namenode01:50070</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.saccluster.namenode02</name>
        <value>namenode02:50070</value>
      </property>
      <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
      </property>
      <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
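
      For the listing above, the HDFS client resolves the logical authority "saccluster" through per-nameservice keys (dfs.ha.namenodes.<nameservice>, dfs.client.failover.proxy.provider.<nameservice>, and dfs.namenode.rpc-address.<nameservice>.<namenode>), so a quick sanity check is to print what those keys evaluate to. The sketch below is only a diagnostic aid, not the client's actual code path; the file paths are the ones given in this report:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;

      public class HaKeyCheck {
          public static void main(String[] args) {
              // Load only the site files named in this report, without the bundled defaults.
              Configuration conf = new Configuration(false);
              conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
              conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));

              String ns = conf.get("dfs.nameservices");
              System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
              System.out.println("dfs.nameservices = " + ns);
              System.out.println("dfs.ha.namenodes." + ns + " = "
                      + conf.get("dfs.ha.namenodes." + ns));
              System.out.println("dfs.client.failover.proxy.provider." + ns + " = "
                      + conf.get("dfs.client.failover.proxy.provider." + ns));
              for (String nn : conf.getTrimmedStrings("dfs.ha.namenodes." + ns)) {
                  System.out.println("dfs.namenode.rpc-address." + ns + "." + nn + " = "
                          + conf.get("dfs.namenode.rpc-address." + ns + "." + nn));
              }
          }
      }

      If the failover proxy provider key in particular prints null for the nameservice, the client would end up treating "saccluster" as an ordinary hostname, which would be consistent with the UnknownHostException above.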

      In /usr/local/hadoop/etc/hadoop/core-site.xml, the default filesystem is defined as follows:

      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://saccluster</value>
      </property>

      In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

      export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
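
      As far as reading these files goes, the hdfs script is expected to put HADOOP_CONF_DIR on the client classpath, and the client's Configuration then loads core-site.xml and hdfs-site.xml from there. One way to see which resources a client JVM actually picked up is the sketch below (class name is illustrative; run it with the Hadoop classpath available, e.g. via "hadoop jar"):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hdfs.HdfsConfiguration;

      public class ShowLoadedResources {
          public static void main(String[] args) {
              // HdfsConfiguration registers hdfs-default.xml and hdfs-site.xml as default resources.
              Configuration conf = new HdfsConfiguration();
              // Configuration's toString() lists the resources that were actually loaded.
              System.out.println(conf);
              System.out.println("fs.defaultFS     = " + conf.get("fs.defaultFS"));
              System.out.println("dfs.nameservices = " + conf.get("dfs.nameservices"));
          }
      }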

      Is "fs" trying to read these properties from somewhere else, such as a separate client configuration file?

      Apologies if I am missing something obvious here.

    People

    • Assignee: Unassigned
    • Reporter: Luigi Di Fraia (luigidifraia)
    • Votes: 0
    • Watchers: 4
