Ambari / AMBARI-13566

After upgrade, NameNode fails to start when Kerberos is enabled using HDP 2.2.8.0


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.2.0
    • Component/s: None
    • Labels: None

    Description

      After upgrading Ambari 1.7.0 (or 2.0.x) to Ambari 2.1.3, and then either
      enabling Kerberos or restarting a cluster where Kerberos was previously
      enabled, the NameNode fails to start with the following error:

      2015-10-16 13:46:04,499 ERROR namenode.NameNode (NameNode.java:main(1645)) - Failed to start namenode.
      java.io.IOException: Login failure for nn/localhost@EXAMPLE.COM from keytab /etc/security/keytabs/nn.service.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user

      at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:976)
      at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:243)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:637)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:656)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:896)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:880)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1574)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1640)
      Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user

      at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:856)
      at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:719)
      at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
      at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
      at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
      at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
      at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
      at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:967)
      ... 7 more
      2015-10-16 13:46:04,517 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
      2015-10-16 13:46:04,532 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:

      I am using HDP 2.2.8.0-3150. Output of `hdp-select status`:

        accumulo-client - None
        accumulo-gc - None
        accumulo-master - None
        accumulo-monitor - None
        accumulo-tablet - None
        accumulo-tracer - None
        falcon-client - None
        falcon-server - None
        flume-server - None
        hadoop-client - 2.2.8.0-3150
        hadoop-hdfs-datanode - 2.2.8.0-3150
        hadoop-hdfs-journalnode - 2.2.8.0-3150
        hadoop-hdfs-namenode - 2.2.8.0-3150
        hadoop-hdfs-nfs3 - 2.2.8.0-3150
        hadoop-hdfs-portmap - 2.2.8.0-3150
        hadoop-hdfs-secondarynamenode - 2.2.8.0-3150
        hadoop-httpfs - 2.2.8.0-3150
        hadoop-mapreduce-historyserver - 2.2.8.0-3150
        hadoop-yarn-nodemanager - 2.2.8.0-3150
        hadoop-yarn-resourcemanager - 2.2.8.0-3150
        hadoop-yarn-timelineserver - 2.2.8.0-3150
        hbase-client - None
        hbase-master - None
        hbase-regionserver - None
        hive-metastore - None
        hive-server2 - None
        hive-webhcat - None
        kafka-broker - None
        knox-server - None
        mahout-client - None
        oozie-client - None
        oozie-server - None
        phoenix-client - None
        ranger-admin - None
        ranger-usersync - None
        slider-client - None
        spark-client - None
        spark-historyserver - None
        sqoop-client - None
        sqoop-server - None
        storm-client - None
        storm-nimbus - None
        storm-slider-client - None
        storm-supervisor - None
        zookeeper-client - 2.2.8.0-3150
        zookeeper-server - 2.2.8.0-3150

      This issue does not exist when using HDP 2.3.4.0-3011.

      *Steps to reproduce #1*

      1. Install Ambari 2.0.2 and HDP 2.2 (HDP-2.2.8.0)
      2. Upgrade Ambari to 2.1.3
      3. Restart HDFS
      4. Enable Kerberos
      5. See Failure

      *Steps to reproduce #2*

      1. Install Ambari 2.0.2 and HDP 2.2 (HDP-2.2.8.0)
      2. Enable Kerberos
      3. Upgrade Ambari to 2.1.3
      4. Restart HDFS
      5. See Failure

      *Cause*
      In `org.apache.ambari.server.upgrade.UpgradeCatalog210#updateHdfsConfigs`,
      `dfs.namenode.rpc-address` is set to be updated to the proper value. However,
      the call to
      `org.apache.ambari.server.upgrade.AbstractUpgradeCatalog#updateConfigurationPropertiesForCluster(org.apache.ambari.server.state.Cluster, java.lang.String, java.util.Map<java.lang.String,java.lang.String>, boolean, boolean)`
      is made with the `updateIfExists` flag set to *false*. Before reaching this
      point, new configs have already been added from the hdfs-site.xml file via
      `org.apache.ambari.server.upgrade.AbstractUpgradeCatalog#addNewConfigurationsFromXml`,
      which added `dfs.namenode.rpc-address` to the hdfs-site config with the value
      "localhost:8020", so the calculated (correct) value was ignored.

      *Solution*
      Change the call to
      `org.apache.ambari.server.upgrade.AbstractUpgradeCatalog#updateConfigurationPropertiesForCluster(org.apache.ambari.server.state.Cluster, java.lang.String, java.util.Map<java.lang.String,java.lang.String>, boolean, boolean)`
      so that it is made with the `updateIfExists` flag set to *true*.
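      The effect of the flag can be sketched with a minimal, self-contained Java
      example. This is not the actual Ambari code; the class name, merge helper,
      and host names are illustrative, but the overwrite-only-if-allowed logic
      mirrors the behavior described above:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an "updateIfExists"-style config merge:
// a calculated value may only replace an already-present key when the
// flag permits it.
public class UpdateIfExistsSketch {

    // Merge newProps into current; overwrite an existing key only when
    // updateIfExists is true. Missing keys are always added.
    static Map<String, String> merge(Map<String, String> current,
                                     Map<String, String> newProps,
                                     boolean updateIfExists) {
        Map<String, String> result = new HashMap<>(current);
        for (Map.Entry<String, String> e : newProps.entrySet()) {
            if (!result.containsKey(e.getKey()) || updateIfExists) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // addNewConfigurationsFromXml has already seeded the stale default:
        Map<String, String> hdfsSite = new HashMap<>();
        hdfsSite.put("dfs.namenode.rpc-address", "localhost:8020");

        // The upgrade catalog later computes the correct value
        // (hostname here is illustrative):
        Map<String, String> calculated = new HashMap<>();
        calculated.put("dfs.namenode.rpc-address", "nn1.example.com:8020");

        // Buggy call: updateIfExists=false, so the stale default survives.
        System.out.println(merge(hdfsSite, calculated, false)
                .get("dfs.namenode.rpc-address"));  // prints "localhost:8020"

        // Fixed call: updateIfExists=true, so the calculated value wins.
        System.out.println(merge(hdfsSite, calculated, true)
                .get("dfs.namenode.rpc-address"));  // prints "nn1.example.com:8020"
    }
}
```

      With the stale `localhost:8020` in place, the NameNode principal in the
      generated keytab does not match the host the NameNode logs in from, which
      is consistent with the `Unable to obtain password from user` failure above.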


      People

        Assignee: Andrew Onischuk (aonishuk)
        Reporter: Andrew Onischuk (aonishuk)
        Votes: 0
        Watchers: 2
