Hadoop HDFS / HDFS-3466

The SPNEGO filter for the NameNode should come out of the web keytab file

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.0, 2.0.0-alpha
    • Fix Version/s: 1.1.0, 2.0.2-alpha
    • Component/s: namenode, security
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Currently, the spnego filter uses the DFS_NAMENODE_KEYTAB_FILE_KEY to find the keytab. It should use the DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY to do it.
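      In code terms, the change described above amounts to reading a different DFSConfigKeys constant when the NameNode HTTP server wires up the SPNEGO filter. A minimal sketch of the intended lookup (the wrapper class name is hypothetical and this is not the committed patch):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hdfs.DFSConfigKeys;

        // Hypothetical holder class, for illustration only.
        final class SpnegoKeytabLookupSketch {
          static String spnegoKeytab(Configuration conf) {
            // Before this fix the filter read the NameNode service keytab key:
            //   conf.get(DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY)
            // The issue asks for the dedicated web-authentication keytab key instead.
            return conf.get(DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY);
          }
        }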

      1. hdfs-3466-b1.patch
        0.8 kB
        Owen O'Malley
      2. hdfs-3466-b1-2.patch
        1 kB
        Owen O'Malley
      3. hdfs-3466-trunk-2.patch
        1 kB
        Owen O'Malley
      4. hdfs-3466-trunk-3.patch
        1 kB
        Owen O'Malley

        Activity

        Owen O'Malley created issue -
        Owen O'Malley added a comment -

        This patch changes the config key.

        Owen O'Malley made changes -
        Field Original Value New Value
        Attachment hdfs-3466-b1.patch [ 12529806 ]
        Attachment hdfs-3466-trunk.patch [ 12529807 ]
        Owen O'Malley made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12529807/hdfs-3466-trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2520//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2520//console

        This message is automatically generated.

        Alejandro Abdelnur added a comment -

        It is an unnecessary flexibility (adding extra complexity) to have 2 keys for a keytab. IMO, we should consolidate both keys in one instead.

        Alejandro Abdelnur added a comment -

        Also, it does not seem correct to use the webhdfs property key for the NN checkpoint configuration.

        Arpit Gupta added a comment -

        It is an unnecessary flexibility (adding extra complexity) to have 2 keys for a keytab. IMO, we should consolidate both keys in one instead.

        Multiple keytabs are needed when you have multiple services that need access to the HTTP principal. For example, if Oozie runs on the same node as the namenode, then rather than sharing one keytab that holds the hdfs, oozie and HTTP principals, you can create a separate keytab that has just the HTTP principal.

        We need to do this because as soon as you add the same principal to a different keytab, earlier keytabs become invalidated.

        Aaron T. Myers added a comment -

        We need to do this because as soon as you add the same principal to a different keytab, earlier keytabs become invalidated.

        That's not necessarily true. It's true by default with MIT kerberos, but it doesn't have to be the case. You can either use the '-norandkey' option when creating the second keytab in kadmin, or you can use ktutil to create a keytab totally offline (i.e. independently of the kadmin server) which contains whatever entries you want from several already-created keytabs.

        Arpit Gupta added a comment -

        @Aaron

        Thanks for the info. I will try these out.

        Aaron T. Myers added a comment -

        Happy to help.

        If these work for you, would you agree with Tucu's assertion that having the ability to configure multiple keytabs is an unnecessary feature?

        Arpit Gupta added a comment -

        @Aaron

        I still feel that this flexibility is good to have. The users would have to keep track of whether any keytab was generated for a given principal to know when to use the '-norandkey' option. To me this makes it easier to manage keytabs and principals.

        Alejandro Abdelnur added a comment -

        @Arpit, if you want to keep a separate configuration property for SPNEGO keytab, it should not be called DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY but SPNEGO_AUTHENTICATION_KERBEROS_KEYTAB_KEY.

        Aaron T. Myers added a comment -

        I still feel that this flexibility is good to have. The users would have to keep track of whether any keytab was generated for a given principal to know when to use the '-norandkey' option. To me this makes it easier to manage keytabs and principals.

        They're still going to have to know to do this even with separate configuration options, since the user might try to export a new keytab for the HTTP/... principal without knowing that they've already done so for a different service. I don't see how having two separate configuration options makes things easier.


        If we go forward with this, then I think we should not require the two separate configuration options. In the current patch, the user would have to set both DFS_NAMENODE_KEYTAB_FILE_KEY and DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY even if entries for both principals were contained in a single keytab. We should make NameNodeHttpServer try DFS_NAMENODE_KEYTAB_FILE_KEY first, and then fall back on trying DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY if the first one does not contain an entry for the appropriate principal.

        Arpit Gupta added a comment -

        @Aaron

        The reason I think it will be easier is that if there is a separate file for the HTTP principal on a given host, then even if you regenerate the keytab for that host, the latest keytab will still be valid, so the user would not have to know.

        Regarding whether to change the key name as @Alejandro suggested, or to fall back if the key is not found: this got changed when HDFS-2617 was committed. Before that, trunk used the DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY key for the keytab for the HTTP principal. Branch 1.0 still uses the same key. So users that are already running webhdfs on a secure cluster will have to change their configs if we change the config keys.

        Alejandro Abdelnur added a comment -

        Regarding the key change, we can deprecate the DFS_WEB_ one.

        Aaron T. Myers added a comment -

        The reason I think it will be easier is that if there is a separate file for the HTTP principal on a given host, then even if you regenerate the keytab for that host, the latest keytab will still be valid, so the user would not have to know.

        But they'd have to know the location(s) where the old (now invalid) keytab was, and be sure to replace it with the new one. My point is that, either way, the user is going to have to know that they've already exported a keytab for a given principal. No way around that.

        Regarding whether to change the key name as @Alejandro suggested, or to fall back if the key is not found: this got changed when HDFS-2617 was committed. Before that, trunk used the DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY key for the keytab for the HTTP principal. Branch 1.0 still uses the same key. So users that are already running webhdfs on a secure cluster will have to change their configs if we change the config keys.

        Whether we change the config key is orthogonal to whether or not we check multiple configuration keys to find the keytab. The concern I have is that those users who don't use WebHDFS shouldn't have to set multiple keytab location config options if their principals are in a single keytab.

        Owen O'Malley added a comment -

        It is good to allow keytabs that need to be shared between services, such as the SPNEGO keytab, to be shared. At this point we can't change the name without a very good reason, since it will break compatibility with hadoop-1.0.X. We also need to fix the secondary namenode, which has exactly the same problem.

        Owen O'Malley added a comment -

        To continue on the point: anyone who has a keytab with the HTTP principal in it has been pointing to it using the DFS_WEB_AUTH config setting. There is no motivation for breaking their configs, and this patch fixes the problem introduced by HDFS-2617.

        Aaron T. Myers added a comment -

        It's fine by me to support folks who already have a keytab which contains only the HTTP principal, I just want to make sure we continue to support folks who are used to putting entries for both principals in the same keytab. So, let's make this code first check the config for DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY, and then fall back to using DFS_NAMENODE_KEYTAB_FILE_KEY if that's not set.

        Owen O'Malley added a comment -

        Can't we take the simpler approach of just making the default value for DFS_WEB_AUTH be DFS_NAMENODE_KEYTAB?
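        A rough illustration of that simpler approach (a sketch only, not the committed code), using Configuration's two-argument get:

          // Use the NameNode service keytab as the default when the dedicated
          // web-authentication keytab key is not set, so a single-keytab setup
          // only needs one configuration option.
          String httpKeytab = conf.get(
              DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY,
              conf.get(DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY));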

        Aaron T. Myers added a comment -

        Sure, that works for me, if you think it's simpler.

        Thanks for the understanding, Owen.

        Tsz Wo Nicholas Sze added a comment -

        +1 patch looks good.

        I will commit it unless someone still has questions on the patch.

        Tsz Wo Nicholas Sze made changes -
        Hadoop Flags Reviewed [ 10343 ]
        Aaron T. Myers added a comment -

        Nicholas, note that the currently-posted patch does not implement the solution that Owen and I discussed and agreed to.

        Owen O'Malley added a comment -

        Here's a patch that incorporates Eli's feedback.

        Owen O'Malley made changes -
        Attachment hdfs-3466-b1-2.patch [ 12542856 ]
        Owen O'Malley made changes -
        Attachment hdfs-3466-trunk.patch [ 12542857 ]
        Owen O'Malley made changes -
        Attachment hdfs-3466-trunk.patch [ 12542857 ]
        Owen O'Malley made changes -
        Attachment hdfs-3466-trunk.patch [ 12529807 ]
        Owen O'Malley made changes -
        Attachment hdfs-3466-trunk-2.patch [ 12542858 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12542858/hdfs-3466-trunk-2.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        -1 javac. The patch appears to cause the build to fail.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3115//console

        This message is automatically generated.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12542858/hdfs-3466-trunk-2.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        -1 javac. The patch appears to cause the build to fail.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3116//console

        This message is automatically generated.

        Eli Collins added a comment -

        Hey Owen, I think you meant to remove the 2nd initialization of httpKeytab.

        +        String httpKeytab = conf.get(
        +          DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY);
        +        if (httpKeytab == null) {
        +          httpKeytab = conf.get(DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY);
        +        }
                 String httpKeytab = conf
                   .get(DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY);
        
        Tsz Wo Nicholas Sze added a comment -

        Aaron, thanks for reminding me. Will wait for Owen's new patches.

        Owen O'Malley added a comment -

        Thanks for catching the bad merge. Here's the updated patch.

        Owen O'Malley made changes -
        Attachment hdfs-3466-trunk-3.patch [ 12543314 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12543314/hdfs-3466-trunk-3.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestHftpDelegationToken

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3133//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3133//console

        This message is automatically generated.

        Eli Collins added a comment -

        +1 lgtm

        Owen O'Malley added a comment -

        I just committed this.

        Owen O'Malley made changes -
        Status Patch Available [ 10002 ] Resolved [ 5 ]
        Fix Version/s 1.1.0 [ 12317959 ]
        Fix Version/s 2.1.0-alpha [ 12321440 ]
        Resolution Fixed [ 1 ]
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #2732 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2732/)
        HDFS-3466. Get HTTP kerberos principal from the web authentication keytab.
        (omalley) (Revision 1379646)

        Result = SUCCESS
        omalley : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1379646
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
        Aaron T. Myers added a comment -

        Thanks a lot, Owen.

        Hudson added a comment -

        Integrated in Hadoop-Common-trunk-Commit #2669 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2669/)
        HDFS-3466. Get HTTP kerberos principal from the web authentication keytab.
        (omalley) (Revision 1379646)

        Result = SUCCESS
        omalley : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1379646
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #2694 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2694/)
        HDFS-3466. Get HTTP kerberos principal from the web authentication keytab.
        (omalley) (Revision 1379646)

        Result = FAILURE
        omalley : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1379646
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #1152 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1152/)
        HDFS-3466. Get HTTP kerberos principal from the web authentication keytab.
        (omalley) (Revision 1379646)

        Result = SUCCESS
        omalley : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1379646
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1183 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1183/)
        HDFS-3466. Get HTTP kerberos principal from the web authentication keytab.
        (omalley) (Revision 1379646)

        Result = SUCCESS
        omalley : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1379646
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
        Arun C Murthy made changes -
        Fix Version/s 2.0.2-alpha [ 12322472 ]
        Fix Version/s 2.1.0-alpha [ 12321440 ]
        Arun C Murthy made changes -
        Status Resolved [ 5 ] Closed [ 6 ]
        Transition                   Time In Source Status   Execution Times   Last Executer    Last Execution Date
        Open → Patch Available       8m 38s                  1                 Owen O'Malley    25/May/12 23:03
        Patch Available → Resolved   98d 39m                 1                 Owen O'Malley    31/Aug/12 23:43
        Resolved → Closed            40d 19h 3m              1                 Arun C Murthy    11/Oct/12 18:46

          People

          • Assignee: Owen O'Malley
          • Reporter: Owen O'Malley
          • Votes: 0
          • Watchers: 12

            Dates

            • Created:
              Updated:
              Resolved:
