  Hadoop Common / HADOOP-9789

Support server advertised kerberos principals

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha, 3.0.0-alpha1
    • Fix Version/s: 2.1.1-beta
    • Component/s: ipc, security
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      The RPC client currently constructs the kerberos principal from a config value, usually with an _HOST substitution. This means the service principal must match the hostname the client is using to connect, which causes problems:

      • Prevents using HA with IP failover when the servers have distinct principals from the failover hostname
      • Prevents clients from being able to access a service bound to multiple interfaces. Only the interface that matches the server's principal may be used.

      The client should be able to use the SASL advertised principal (HADOOP-9698), with appropriate safeguards, to acquire the correct service ticket.
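
      For illustration, a minimal sketch of the current behavior described above (hypothetical helper names, not Hadoop's actual API): the principal is derived from the config value by substituting _HOST with the hostname the client dials, so the result is tied to that hostname.

      // Illustration only (hypothetical names, not Hadoop's actual API): how the
      // client currently derives the service principal from a config value with
      // an _HOST placeholder, tying the principal to the hostname used to connect.
      public class PrincipalResolution {

          /** Substitute _HOST in a configured principal with the connect hostname. */
          static String resolvePrincipal(String configuredPrincipal, String connectHost) {
              return configuredPrincipal.replace("_HOST", connectHost);
          }

          public static void main(String[] args) {
              // "nn/_HOST@EXAMPLE.COM" resolved against the address the client dials.
              // With IP failover or a multi-homed server, this hostname may differ
              // from the host component of the server's real keytab principal, so
              // the service ticket the client requests will not match the server.
              System.out.println(resolvePrincipal("nn/_HOST@EXAMPLE.COM", "nn-vip.example.com"));
          }
      }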

      Attachments

      1. HADOOP-9789.2.patch
        0.9 kB
        Daryn Sharp
      2. HADOOP-9789.patch
        16 kB
        Daryn Sharp
      3. HADOOP-9789.patch
        13 kB
        Daryn Sharp
      4. hadoop-ojoshi-datanode-HW10351.local.log
        344 kB
        Omkar Vinit Joshi
      5. hadoop-ojoshi-namenode-HW10351.local.log
        194 kB
        Omkar Vinit Joshi

        Issue Links

          Activity

          daryn Daryn Sharp added a comment -

          Patch continues to use the KerberosInfo annotated principal (usually with _HOST) in the conf. However, the client will first look for another key with the suffix "-pattern". If found, the advertised principal must match the given pattern. This allows anywhere from very lax to strict constraints on a server-advertised principal.

          There's a battery of unit tests that demonstrate the behavior.

          Note: I think there might be an existing unintended problem with auth_to_local rewrite rules being applied to the principal. If so, the server may require a 1-2 line change.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12594805/HADOOP-9789.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2870//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2870//console

          This message is automatically generated.

          daryn Daryn Sharp added a comment -

          Changed key suffix to .pattern instead of -pattern.

          Changed the server to not use KerberosName, which may apply auth_to_local rewriting. The server's SPN is always the authentication principal in the UGI.
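
          A rough sketch of the client-side check this patch introduces, with hypothetical names (the real logic lives in SaslRpcClient): if a "<principal-key>.pattern" entry is configured, the server-advertised principal may be trusted when it matches that pattern; otherwise the client behaves as before.

          import java.util.regex.Pattern;
          import javax.security.sasl.SaslException;

          // Rough sketch with hypothetical names, not the actual SaslRpcClient code:
          // when a "<principal-key>.pattern" entry is configured, the server-advertised
          // principal may be trusted as long as it matches the pattern; if it matches
          // neither the pattern nor the principal derived from the conf, the
          // connection is rejected.
          class ServicePrincipalChooser {
              static String choose(String confPrincipal, String advertised, String pattern)
                      throws SaslException {
                  if (advertised == null || advertised.equals(confPrincipal)) {
                      return confPrincipal;   // no pattern needed: pre-patch behavior
                  }
                  if (pattern != null && Pattern.matches(pattern, advertised)) {
                      return advertised;      // trusted via the configured pattern
                  }
                  throw new SaslException("Server advertised untrusted principal: " + advertised);
              }
          }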

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12595196/HADOOP-9789.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2894//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2894//console

          This message is automatically generated.

          kihwal Kihwal Lee added a comment -

          1. Don't we need to support a per-nameservice SPN pattern for namenode RPC? If a client is talking to multiple name services, it might need a separate pattern for each name service.

          For viewfs and HA config, while the server side utilizes NameNode.initializeGenericKeys() to set up a conf, the client side uses DFSUtil or HAUtil to extract certain keys. In order to support a per-nameservice SPN pattern, the client code needs to do something equivalent to what the server side is doing, or obtain the value and explicitly set this variable before creating an RPC proxy. In either case, it needs to be in NameNode.NAMESERVICE_SPECIFIC_KEYS or NameNode.NAMENODE_SPECIFIC_KEYS. If there were only the HA client, we might do it in the failover proxy implementations, but to support viewfs, a more generic solution will be better.

          If you agree, please file an HDFS jira to address this.

          2. I am sure this is the case, but I want to double check because it is critical for security. When the conf for the SPN contains "_HOST" and there is no pattern, the comparison will be done against the same SPN that would have been used in the client pre-patch. serverAddr comes from the ConnectionId used for creating the Connection instance, and it is from a conf, not something that can be dynamically updated by external services (e.g. Connection.server). This address is used for "_HOST" substitution, so I think it is safe.

          +1 pending your confirmation that 2) is true.

          daryn Daryn Sharp added a comment -

          If I understand the suggestion, per-NN SPN patterns require conf updates every time a new NN is "HA enabled", which kind of defeats the goal of not managing conf changes. Then you have to decide whether to key on the IP, the given hostname, its canonicalized hostname, etc. I envision it being set to something like "hdfs/*-nn?.domain@REALM".

          As for #2, in the absence of a SPN pattern key, it will do exactly what it did before.
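
          For illustration, a naive sketch of matching the kind of glob-style pattern envisioned above against an advertised principal; the glob-to-regex translation here is an assumption for the example, not necessarily how Hadoop performs the matching.

          import java.util.regex.Pattern;

          // Naive illustration of matching a glob-style SPN pattern such as
          // "hdfs/*-nn?.domain@REALM" against an advertised principal. The
          // glob-to-regex translation is a sketch only.
          class SpnPatternDemo {
              static Pattern globToRegex(String glob) {
                  String regex = glob
                          .replace(".", "\\.")   // escape literal dots first
                          .replace("*", ".*")    // '*' matches any run of characters
                          .replace("?", ".");    // '?' matches a single character
                  return Pattern.compile(regex);
              }

              public static void main(String[] args) {
                  Pattern p = globToRegex("hdfs/*-nn?.domain@REALM");
                  System.out.println(p.matcher("hdfs/cluster1-nn2.domain@REALM").matches()); // true
                  System.out.println(p.matcher("hdfs/rogue.host@REALM").matches());          // false
              }
          }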

          kihwal Kihwal Lee added a comment -

          I think it will be useful to support a per-namespace pattern. A per-NN pattern is pointless, as you said.

          +1 the patch looks good.

          daryn Daryn Sharp added a comment -

          Thanks Kihwal! Committed to trunk/2/2.1

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #4235 (See https://builds.apache.org/job/Hadoop-trunk-Commit/4235/)
          HADOOP-9789. Support server advertised kerberos principals (daryn) (daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1512380)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #297 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/297/)
          HADOOP-9789. Support server advertised kerberos principals (daryn) (daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1512380)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1487 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1487/)
          HADOOP-9789. Support server advertised kerberos principals (daryn) (daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1512380)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1514 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1514/)
          HADOOP-9789. Support server advertised kerberos principals (daryn) (daryn: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1512380)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          ojoshi Omkar Vinit Joshi added a comment -

          Hi Daryn,

          The secured cluster is breaking after this patch. I reverted this patch on a local secured cluster and it works. I am attaching the logs for HDFS/RM/NM. Thanks.

          ojoshi Omkar Vinit Joshi added a comment -

          Hi Daryn,

          I found a difference between the older and newer code in terms of serverId. Is it expected to be this way?

          • Location :- org.apache.hadoop.security.SaslRpcServer.java
            • Kerberos principal name is hdfs/localhost@LOCALHOST
            • older :-
              • protocol :- hdfs
              • serverId :- localhost
            • newer :-
              • protocol :- hdfs
              • serverId :- localhost@LOCALHOST (why is it instance@realm now?)

          Can you please give me more information and let me know if I am missing anything?
          I am using the below hadoop.security.auth_to_local config in core-site.xml:
          hadoop.security.auth_to_local : "RULE:[1:$1@$0](.*@LOCALHOST)s/@.*// DEFAULT"
          ojoshi Omkar Vinit Joshi added a comment -

          If I make serverId = new KerberosName(UserGroupInformation.getCurrentUser().getUserName()).getHostName() (i.e. the older code), it works.

          String[] parts = fullName.split("[/@]", 2);

          Is there a problem with this?
          If the principal name is hdfs/localhost@LOCALHOST, what do we expect for serverId: [localhost] or [localhost@LOCALHOST]?

          ojoshi Omkar Vinit Joshi added a comment -

          Reopening as it is failing in trunk.

          daryn Daryn Sharp added a comment -

          Yes, apologies, I'm already aware of the problem. The split limit needs to be "3", not "2". My day has been blocked off, so I was/am going to file a jira after our QA verifies that one-line change.
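
          A small demo of the split-limit difference being discussed, using the principal format shown in the earlier comment:

          import java.util.Arrays;

          // Demo of the one-line issue above: splitting "hdfs/localhost@LOCALHOST" on
          // "[/@]" with a limit of 2 leaves the realm attached to the serverId part,
          // while a limit of 3 yields the expected service/host/realm components.
          class PrincipalSplitDemo {
              public static void main(String[] args) {
                  String fullName = "hdfs/localhost@LOCALHOST";
                  System.out.println(Arrays.toString(fullName.split("[/@]", 2)));
                  // -> [hdfs, localhost@LOCALHOST]   (serverId wrongly includes the realm)
                  System.out.println(Arrays.toString(fullName.split("[/@]", 3)));
                  // -> [hdfs, localhost, LOCALHOST]  (serverId is just the host)
              }
          }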

          daryn Daryn Sharp added a comment -

          We should probably put this on a different jira, but this is the fix.

          daryn Daryn Sharp added a comment -

          Will be fixed by HADOOP-9868.

          ojoshi Omkar Vinit Joshi added a comment -

          Daryn Sharp, sorry, I didn't get time yesterday to check the latest patch. I will try it on my local secured cluster and let you know. Thanks for fixing it.

          tucu00 Alejandro Abdelnur added a comment -

          Crossposting my comment in HADOOP-9868,

          I'm a bit puzzled by HADOOP-9789. While I understand the reasoning for it, doesn't it weaken security? An impersonator can publish an alternate principal for which it has a keytab.


            People

            • Assignee:
              daryn Daryn Sharp
              Reporter:
              daryn Daryn Sharp
            • Votes:
              0
              Watchers:
              13

              Dates

              • Created:
                Updated:
                Resolved:

                Development