Hadoop Common / HADOOP-13119

Add ability to secure log servlet using proxy users

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.8.0, 2.7.4
    • Fix Version/s: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
    • Component/s: None

      Description

      Using Hadoop in secure mode:
      Log in as a KDC user and kinit.
      Start Firefox and enable Kerberos negotiation.
      Access http://localhost:50070/logs/
      Get 403 authorization errors.
      Only the hdfs user can access the logs.
      As a user, I would expect to be able to reach the logs link from the web interface.
      Same results if using curl:
      curl -v --negotiate -u tester: http://localhost:50070/logs/
      HTTP/1.1 403 User tester is unauthorized to access this page.
      So:
      1. Either don't show the links if only the hdfs user is able to access them,
      2. or provide a mechanism to add users to the web application realm.
      3. Note that we pass authentication, so the issue is authorization to /logs/.

      I suspect that the /logs/ path is secured in the web descriptor, so by default users don't have access to secured paths.

      Attachments

      1. screenshot-1.png
        43 kB
        Yuanbo Liu
      2. HADOOP-13119.005.patch
        21 kB
        Yuanbo Liu
      3. HADOOP-13119.005.patch
        21 kB
        Yuanbo Liu
      4. HADOOP-13119.004.patch
        20 kB
        Yuanbo Liu
      5. HADOOP-13119.003.patch
        19 kB
        Yuanbo Liu
      6. HADOOP-13119.002.patch
        22 kB
        Yuanbo Liu
      7. HADOOP-13119.001.patch
        24 kB
        Yuanbo Liu


          Activity

          templedf Daniel Templeton added a comment -

          I am able to replicate the issue on a secure cluster.

          Should this JIRA move to the HDFS project since it appears to be specifically a namenode UI issue?

          arpitagarwal Arpit Agarwal added a comment -

          Copying comment from Jeffrey E Rodriguez:

          This Jira should have been a HDFS Jira. I am closing since the solution is to set the property dfs.cluster.administrators which would allow access to /log to a group or user.

          eyang Eric Yang added a comment -

          Hi Arpit,

          There is no authentication done for the /logs servlet. Whether the log URL is proxied through Knox or uses SPNEGO for authentication, the HDFS log URL passes through unfiltered. Reopening this because the problem is general in

          hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java

          The addDefaultApps method is supposed to honor the HTTP authentication method (SPNEGO or simple) and choose a suitable filter accordingly. This part of the code seems to be missing.

          jeffreyr97 Jeffrey E Rodriguez added a comment - - edited

          Hi Eric, you are right: only /stacks, /logLevel, /metrics, /jmx, and /conf are set up with SPNEGO authentication (through the addServlet method).
          /logs access is controlled only by the HttpServer2.hasAdministratorAccess method and is not covered by the SPNEGO filter.

          SPNEGO authentication is done through the SpnegoFilter, which needs to be configured with the correct Hadoop security class: hadoop.http.filter.initializers = org.apache.hadoop.security.AuthenticationFilterInitializer.

          Why was it done this way? A design error? Just a bug?

          I would be curious about the opinion of the community.

          In my use case, access to /logs is through a proxy server (Knox), so the end user accessing the logs is the remote user (knox).

          The user I would expect is the doAs user, but since access to the /logs servlet is not using SPNEGO, there is not really a doAs (there is no authentication).

          I closed this since setting dfs.cluster.administrators gave me a way to access /logs. I am now questioning the use of the "dfs.cluster.administrators" property.
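
          For reference, the workaround looks roughly like this in hdfs-site.xml (a sketch; the user and group names are examples, and the value uses the usual "users groups" ACL format):

            <!-- Example only: grants the admin-only servlets such as /logs
                 to user knox and to members of group hdfsadmins. -->
            <property>
              <name>dfs.cluster.administrators</name>
              <value>knox hdfsadmins</value>
            </property>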

          yuanbo Yuanbo Liu added a comment -

          Hi, Eric.
          I've taken over this JIRA and will fix it later.

          yuanbo Yuanbo Liu added a comment - - edited

          Jeffrey E Rodriguez/Eric Yang
          I've read through the implementation of HttpServer2.java and some filters; here is my investigation.

          From the picture, we can see that /logs access is also controlled by the SPNEGO filter (the authentication in the filter chain is a SPNEGO filter).
          HttpServer2#initSpnego is confusing because this method does not work and is not how the SPNEGO filter is added. The right steps to enable SPNEGO are:

          hadoop.http.authentication.simple.anonymous.allowed    false
          hadoop.http.authentication.signature.secret.file       /etc/security/http_secret
          hadoop.http.authentication.type         kerberos
          hadoop.http.authentication.kerberos.keytab      /etc/security/keytabs/spnego.service.keytab
          hadoop.http.authentication.kerberos.principal   HTTP/_HOST@EXAMPLE.COM
          hadoop.http.filter.initializers org.apache.hadoop.security.AuthenticationFilterInitializer
          hadoop.http.authentication.cookie.domain       EXAMPLE.COM
          

          The SPNEGO filter is added by the method HttpServer2#addFilter.
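
          Expressed as core-site.xml entries, the settings above look roughly like this (the keytab path, principal, and cookie domain are environment-specific placeholders):

            <property>
              <name>hadoop.http.filter.initializers</name>
              <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
            </property>
            <property>
              <name>hadoop.http.authentication.type</name>
              <value>kerberos</value>
            </property>
            <property>
              <name>hadoop.http.authentication.kerberos.principal</name>
              <value>HTTP/_HOST@EXAMPLE.COM</value>
            </property>
            <!-- The remaining properties (signature secret file, keytab,
                 anonymous access, cookie domain) follow the same pattern. -->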

          Jeffrey E Rodriguez The reason you cannot access /logs is that /logs doesn't only require authentication but also requires authorization by default, and authorization is controlled by the property dfs.cluster.administrators. The user knox succeeds in authentication but fails in authorization. Adding the user knox to dfs.cluster.administrators is expected behavior because this configuration controls who can access the default servlets.
          On the other hand, I love the idea of making the SPNEGO filter support proxy users. Proxy user is a basic function in Hadoop, and the SPNEGO filter should support it. By the way, I need to apologize for mixing up the concepts of proxy user and delegation filter in the internal discussion; they're quite different.

          In conclusion, I propose:

          • Erasing HttpServer2#initSpnego. The code is useless and misleading.
          • Extending org.apache.hadoop.security.AuthenticationFilter so that the SPNEGO filter supports proxy users by default (see the sketch after this list).
          • Deleting the redundant NoCacheFilter (see the pic) in the WebAppContext, and adding NoCacheFilter to the LogContext's filter chain.
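
          A minimal sketch of what the second point could look like, assuming a "doAs" query parameter and the standard hadoop.proxyuser.* rules (the class name and wrapper are hypothetical; this is not the committed patch):

            import java.io.IOException;
            import javax.servlet.FilterChain;
            import javax.servlet.ServletException;
            import javax.servlet.http.HttpServletRequest;
            import javax.servlet.http.HttpServletRequestWrapper;
            import javax.servlet.http.HttpServletResponse;
            import org.apache.hadoop.security.UserGroupInformation;
            import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
            import org.apache.hadoop.security.authorize.AuthorizationException;
            import org.apache.hadoop.security.authorize.ProxyUsers;

            // Hypothetical sketch: after SPNEGO authentication succeeds, honor a
            // "doAs" query parameter by checking the proxy-user rules and exposing
            // the impersonated user as the remote user downstream.
            public class ProxyUserAuthenticationFilter extends AuthenticationFilter {
              @Override
              protected void doFilter(FilterChain filterChain, HttpServletRequest request,
                  HttpServletResponse response) throws IOException, ServletException {
                final String doAsUser = request.getParameter("doAs");
                if (doAsUser != null && !doAsUser.equals(request.getRemoteUser())) {
                  UserGroupInformation realUser =
                      UserGroupInformation.createRemoteUser(request.getRemoteUser());
                  UserGroupInformation proxyUser =
                      UserGroupInformation.createProxyUser(doAsUser, realUser);
                  try {
                    // Checks the hadoop.proxyuser.<realUser>.hosts/users/groups rules.
                    ProxyUsers.authorize(proxyUser, request.getRemoteAddr());
                  } catch (AuthorizationException ex) {
                    response.sendError(HttpServletResponse.SC_FORBIDDEN, ex.getMessage());
                    return;
                  }
                  request = new HttpServletRequestWrapper(request) {
                    @Override
                    public String getRemoteUser() {
                      return doAsUser; // downstream authorization sees the doAs user
                    }
                  };
                }
                super.doFilter(filterChain, request, response);
              }
            }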

          Zhijie Shen/Aaron T. Myers/Daryn Sharp/Vinod Kumar Vavilapalli, I tag you guys here since you have contributed a lot of security filters in Hadoop.
          If you and people in the watching list have any thoughts about this JIRA, please let me know. Thanks in advance.

          Wancy Shi Wang added a comment -

          Hi Yuanbo Liu,

          Your proposals look good to me. For the second point, there is DelegationTokenAuthenticationFilter, which extends AuthenticationFilter and supports proxy users.
          In my opinion, we can either add proxy user support directly in AuthenticationFilter or use the existing DelegationTokenAuthenticationFilter.
          Adding it directly in AuthenticationFilter seems more straightforward and touches fewer files, but we need to verify it is harmless and makes sense to add there.
          To use the existing code in DelegationTokenAuthenticationFilter, we need a filter initializer that adds DelegationTokenAuthenticationFilter to the filter chain (see the sketch below).
          Because YARN uses RMAuthenticationFilterInitializer to support delegation token authentication and proxy users, we may be able to apply the same approach to Hadoop Common.
          And by configuring hadoop.http.filter.initializers to a self-defined filter initializer, we can add filters as needed.
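
          For illustration, a self-defined initializer might look like this sketch (the class name is hypothetical; a real one would also copy the relevant configuration into the filter parameters):

            import java.util.HashMap;
            import java.util.Map;
            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.http.FilterContainer;
            import org.apache.hadoop.http.FilterInitializer;
            import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter;

            // Hypothetical sketch: registers DelegationTokenAuthenticationFilter on
            // every web context of the HTTP server.
            public class DTAuthFilterInitializer extends FilterInitializer {
              @Override
              public void initFilter(FilterContainer container, Configuration conf) {
                Map<String, String> params = new HashMap<>();
                // A real initializer would populate params from conf here, e.g. the
                // hadoop.http.authentication.* properties with the prefix stripped.
                container.addFilter("DTAuthenticationFilter",
                    DelegationTokenAuthenticationFilter.class.getName(), params);
              }
            }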

          yuanbo Yuanbo Liu added a comment - - edited

          Shi Wang
          Thanks for your response.
          I have two concerns about using the delegation token initializer:

          • The delegation filter and the SPNEGO filter are different; using the delegation filter, which supports proxy users, will change the URL rules and the way you request those URLs. I believe it will require a lot of code changes in Knox, since the current code is based on the SPNEGO filter, right?
          • The delegation filter and the SPNEGO filter cannot coexist. If we replace the SPNEGO initializer with the delegation initializer, it will introduce incompatibility issues in some downstream components because of code like this:
            if (initializer.getName().equals(
                AuthenticationFilterInitializer.class.getName())) {
              hasHadoopAuthFilterInitializer = true;
            }

          Thus, I'd prefer extending the SPNEGO filter to make it support proxy users.

          yuanbo Yuanbo Liu added a comment -

          Uploaded the first patch for this issue. Any comments are welcome.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 7m 8s trunk passed
          +1 compile 10m 59s trunk passed
          +1 checkstyle 0m 34s trunk passed
          +1 mvnsite 1m 8s trunk passed
          +1 mvneclipse 0m 22s trunk passed
          +1 findbugs 1m 30s trunk passed
          +1 javadoc 0m 53s trunk passed
          +1 mvninstall 0m 36s the patch passed
          +1 compile 9m 20s the patch passed
          +1 javac 9m 20s the patch passed
          -0 checkstyle 0m 34s hadoop-common-project/hadoop-common: The patch generated 1 new + 54 unchanged - 4 fixed = 55 total (was 58)
          +1 mvnsite 1m 5s the patch passed
          +1 mvneclipse 0m 22s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 1m 39s hadoop-common-project/hadoop-common generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0)
          +1 javadoc 0m 51s the patch passed
          -1 unit 7m 54s hadoop-common in the patch failed.
          +1 asflicense 0m 39s The patch does not generate ASF License warnings.
          48m 0s



          Reason Tests
          FindBugs module:hadoop-common-project/hadoop-common
            Unread field:HttpServer2.java:[line 273]
            Unread field:HttpServer2.java:[line 159]
            Unread field:HttpServer2.java:[line 268]
          Failed junit tests hadoop.log.TestLogLevel



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:e809691
          JIRA Issue HADOOP-13119
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12837984/HADOOP-13119.001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux d6c126bb11ab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 026b39a
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11020/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          yuanbo Yuanbo Liu added a comment - - edited

          Deleting HttpServer2#initSpnego will cause some findbugs issues and test failures, so it's not worth doing in this JIRA. But I still recommend deleting HttpServer2#initSpnego; it's misleading and does not work. Maybe I will file another JIRA to discuss it.

          Uploaded v2 patch to address the code style issue.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 6m 51s trunk passed
          +1 compile 9m 36s trunk passed
          +1 checkstyle 0m 28s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 23s trunk passed
          +1 javadoc 0m 48s trunk passed
          +1 mvninstall 0m 37s the patch passed
          +1 compile 9m 7s the patch passed
          +1 javac 9m 7s the patch passed
          +1 checkstyle 0m 29s hadoop-common-project/hadoop-common: The patch generated 0 new + 5 unchanged - 4 fixed = 5 total (was 9)
          +1 mvnsite 1m 0s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 31s the patch passed
          +1 javadoc 0m 47s the patch passed
          +1 unit 8m 25s hadoop-common in the patch passed.
          +1 asflicense 0m 32s The patch does not generate ASF License warnings.
          45m 17s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13119
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12838704/HADOOP-13119.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux b5b7762a3f6c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 96f4392
          Default Java 1.8.0_101
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11055/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11055/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          yuanbo Yuanbo Liu added a comment -

          Eric Yang/Xiao Chen/Mingliang Liu Sorry to interrupt. Would you mind taking a look at this issue and giving some thoughts? Thanks in advance!

          eyang Eric Yang added a comment -

          Hi Yuanbo Liu,

          hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java still exists in Hadoop. Therefore, it is best to keep the hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java unit test. This ensures that the existing code still has a test case: if any changes are made to the AuthenticationFilter class, there are still tests to guard against breakage.

          yuanbo Yuanbo Liu added a comment -

          Eric Yang Thanks for your comments.

          there are still tests to guard against breakage.

          Makes sense to me; I'll update my patch ASAP.

          yuanbo Yuanbo Liu added a comment -

          Uploaded v3 patch for this JIRA.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 13m 38s trunk passed
          +1 compile 9m 44s trunk passed
          +1 checkstyle 0m 29s trunk passed
          +1 mvnsite 1m 0s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 25s trunk passed
          +1 javadoc 0m 48s trunk passed
          +1 mvninstall 0m 37s the patch passed
          +1 compile 9m 12s the patch passed
          +1 javac 9m 12s the patch passed
          -0 checkstyle 0m 28s hadoop-common-project/hadoop-common: The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3)
          +1 mvnsite 0m 59s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 32s the patch passed
          +1 javadoc 0m 48s the patch passed
          -1 unit 7m 40s hadoop-common in the patch failed.
          +1 asflicense 0m 33s The patch does not generate ASF License warnings.
          51m 30s



          Reason Tests
          Failed junit tests hadoop.security.TestAuthenticationFilter



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13119
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847767/HADOOP-13119.003.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 36d0f47d21e7 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 7ee8be1
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11446/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11446/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11446/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11446/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          yuanbo Yuanbo Liu added a comment -

          Uploaded v4 patch to address the failing test case.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 18s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 14m 2s trunk passed
          +1 compile 10m 29s trunk passed
          +1 checkstyle 0m 29s trunk passed
          +1 mvnsite 1m 0s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 29s trunk passed
          +1 javadoc 0m 47s trunk passed
          +1 mvninstall 0m 38s the patch passed
          +1 compile 10m 21s the patch passed
          +1 javac 10m 21s the patch passed
          -0 checkstyle 0m 29s hadoop-common-project/hadoop-common: The patch generated 2 new + 6 unchanged - 3 fixed = 8 total (was 9)
          +1 mvnsite 1m 8s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 47s the patch passed
          +1 javadoc 0m 49s the patch passed
          +1 unit 9m 39s hadoop-common in the patch passed.
          +1 asflicense 0m 29s The patch does not generate ASF License warnings.
          56m 10s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13119
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847779/HADOOP-13119.004.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 79316ff6da37 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 7ee8be1
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11447/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11447/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11447/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          eyang Eric Yang added a comment -

          Hi Yuanbo Liu,

          Could you fix the newly introduced style check errors? Thanks

          yuanbo Yuanbo Liu added a comment -

          Eric Yang Thanks for reviewing. Uploaded v5 patch.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 13m 14s trunk passed
          -1 compile 10m 6s root in trunk failed.
          +1 checkstyle 0m 30s trunk passed
          +1 mvnsite 1m 4s trunk passed
          +1 mvneclipse 0m 17s trunk passed
          +1 findbugs 1m 28s trunk passed
          +1 javadoc 0m 48s trunk passed
          +1 mvninstall 0m 38s the patch passed
          +1 compile 13m 47s the patch passed
          -1 javac 13m 47s root generated 492 new + 183 unchanged - 0 fixed = 675 total (was 183)
          +1 checkstyle 0m 27s hadoop-common-project/hadoop-common: The patch generated 0 new + 6 unchanged - 3 fixed = 6 total (was 9)
          +1 mvnsite 0m 59s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 41s the patch passed
          +1 javadoc 0m 49s the patch passed
          -1 unit 8m 14s hadoop-common in the patch failed.
          +1 asflicense 0m 34s The patch does not generate ASF License warnings.
          56m 54s



          Reason Tests
          Failed junit tests hadoop.fs.viewfs.TestViewFsTrash



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13119
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847998/HADOOP-13119.005.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 242a8dd23b7b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e224c96
          Default Java 1.8.0_111
          compile https://builds.apache.org/job/PreCommit-HADOOP-Build/11453/artifact/patchprocess/branch-compile-root.txt
          findbugs v3.0.0
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/11453/artifact/patchprocess/diff-compile-javac-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11453/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11453/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11453/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          I did a quick read through the JIRA, so I apologize if I've missed something. But I think there has been a big misunderstanding of the original design intent in this JIRA:

          Would expect as a user to be able to web interface logs link.

          From what I remember, there are certain sets of links that were designed not to be open to end users in any way, shape, or form because they have a tendency to leak sensitive information in "real world" use cases. /logs, for example, exposes file and directory names, amongst other info.

          The dfs.cluster.administrators setting was intended to give access to those "admin-only" links. (This clearly predates YARN.) The users in this group should almost certainly not be proxiable accounts, as it opens up a whole new can of security worms with regards to secondary systems; does your workflow scheduler allow anyone to run as any other user?

          That said, I could see under extremely limited circumstances why proxying might be necessary. This falls under "enough rope to hang yourself"--it's a bad idea, but sometimes you have no choice. As long as one is careful, it might work out ok.

          yuanbo Yuanbo Liu added a comment -

          Allen Wittenauer Thanks for your response.

          I did a quick read through the...

          Sorry, the old discussion may be a little confusing here, so I'd like to clarify it first.
          When security is enabled in Hadoop, Knox cannot access "/logs", and adding the user "knox" to "dfs.cluster.administrators" seems to be the only way to let customers access the link through Knox. As you mentioned above, the users in this group should almost certainly not be proxiable accounts, and I agree with you. So we should extend the ability of the HTTP filter to support proxy users. That means when user "sam" wants to access "/logs" of a secure Hadoop cluster through Knox, we just need to add "sam" to "dfs.cluster.administrators" and have user "knox" impersonate sam; user "knox" satisfies the authentication requirement while user "sam" satisfies the authorization requirement. In the end, user "sam" can access the link "/logs". (See the example request below.)
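
          For example, assuming the proxy-user rules allow knox to impersonate sam (hostname, port, keytab path, and principals are illustrative):

            # knox authenticates itself via SPNEGO; the doAs query parameter
            # names the end user to impersonate. This assumes the rules such as
            # hadoop.proxyuser.knox.hosts and hadoop.proxyuser.knox.users
            # permit the impersonation.
            kinit -kt /etc/security/keytabs/knox.service.keytab knox/host@EXAMPLE.COM
            curl --negotiate -u : "http://namenode.example.com:50070/logs/?doAs=sam"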

          allow anyone to run as any other user

          The answer is absolutely no; this is not the purpose of this JIRA. I just want to extend the function of the SPNEGO filter and let it support impersonation.

          extremely limited circumstances why proxying might be necessary

          When I dug into it more, I found the filter chains in different Hadoop components are quite varied, and of course we want to unify them. When it comes to YARN or the Job History Server, we want to use the SPNEGO filter instead of the delegation filter, which is clearly supported by Hadoop (we can find the introduction in the Hadoop docs); then proxying becomes quite important, because there are a lot of application users in YARN. From the security perspective, when Knox accesses YARN application links, we don't want to have only one user "knox"; we need user "knox" to impersonate different users. So extending the SPNEGO filter's function is needed.
          Hope my reply answers your doubts. Any further comments will be appreciated. Thanks a lot!

          eyang Eric Yang added a comment -

          Retriggering the test run; the current test failure seems unrelated to this patch.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 13m 27s trunk passed
          +1 compile 12m 46s trunk passed
          +1 checkstyle 0m 28s trunk passed
          +1 mvnsite 1m 1s trunk passed
          +1 mvneclipse 0m 18s trunk passed
          +1 findbugs 1m 26s trunk passed
          +1 javadoc 0m 47s trunk passed
          +1 mvninstall 0m 36s the patch passed
          +1 compile 11m 8s the patch passed
          +1 javac 11m 8s the patch passed
          +1 checkstyle 0m 28s hadoop-common-project/hadoop-common: The patch generated 0 new + 6 unchanged - 3 fixed = 6 total (was 9)
          +1 mvnsite 0m 58s the patch passed
          +1 mvneclipse 0m 18s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 31s the patch passed
          +1 javadoc 0m 47s the patch passed
          +1 unit 7m 44s hadoop-common in the patch passed.
          +1 asflicense 0m 33s The patch does not generate ASF License warnings.
          56m 24s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13119
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848691/HADOOP-13119.005.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 636ee83783d1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / ccf2d66
          Default Java 1.8.0_121
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11487/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11487/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          eyang Eric Yang added a comment -

          +1 looks good. I just committed this. Thank you, Yuanbo.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11158 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11158/)
          HADOOP-13119. Add ability to secure log servlet using proxy users. (eyang: rev a847903b6e64c6edb11d852b91f2c816b1253eb3)

          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
          • (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWithSpengo.java
          • (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationWithProxyUserFilter.java
          • (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationWithProxyUserFilter.java
          djp Junping Du added a comment -

2.8.0 is frozen for all non-blocker commits. Reverting this from branch-2.8.0.

          yuanbo Yuanbo Liu added a comment - - edited

Reopening this. For some links (such as /jmx and /stack), blocking the request in the filter chain because of an impersonation failure is not friendly to users. For example, suppose user "sam" is not allowed to be impersonated by user "knox": the link /jmx does not require any authorization by default, it only requires user "knox" to authenticate, so it is not right to block the access in the SPNEGO filter. We intend to verify impersonation only when the request's getRemoteUser method is actually used, so that such links are not blocked by mistake. I will attach a new patch ASAP.
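A minimal sketch of that deferred check (hypothetical names and simplified logic, not the exact patch): wrap the request so the proxy-user validation runs only when a downstream component actually asks for the remote user.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class DeferredProxyUserRequest extends HttpServletRequestWrapper {

  private final String authenticatedUser; // principal established by SPNEGO/simple auth

  public DeferredProxyUserRequest(HttpServletRequest request, String authenticatedUser) {
    super(request);
    this.authenticatedUser = authenticatedUser;
  }

  @Override
  public String getRemoteUser() {
    String doAs = getParameter("doAs");
    if (doAs == null) {
      // No impersonation requested: endpoints like /jmx and /stack simply
      // see the authenticated principal and are never blocked.
      return authenticatedUser;
    }
    // Impersonation requested: check it here, at the point where the
    // proxied identity is consumed, instead of rejecting up front.
    if (!isProxyAllowed(authenticatedUser, doAs)) {
      throw new IllegalStateException(
          authenticatedUser + " is not allowed to impersonate " + doAs);
    }
    return doAs;
  }

  private boolean isProxyAllowed(String realUser, String proxyUser) {
    // Placeholder: a real implementation would consult Hadoop's
    // hadoop.proxyuser.* configuration rather than hard-code a user.
    return "knox".equals(realUser);
  }
}

The stack trace in a later comment (AuthenticationWithProxyUserFilter$1.getRemoteUser) suggests the eventual change took a similar shape, with the real check consulting the proxy-user configuration.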

          yuanbo Yuanbo Liu added a comment -

It would be great if a committer could help me revert my patch so that I can provide a new patch for this issue. Thanks in advance!

          aw Allen Wittenauer added a comment -

Re-resolving this, as it was committed to 3.0.0-alpha2 as well, despite that release missing from the fix version field. Since it's already been committed and released, we can't revert it or re-open this JIRA.

          You'll need to open a new JIRA with a code fix.

          eyang Eric Yang added a comment -

Yuanbo, I would recommend opening a new JIRA for the new problem that you found. The original JIRA did not mention /jmx, and there are good reasons to keep /jmx readable by system users only. For example, the Ambari metrics system is supposed to have access to /jmx to collect stats; ams is not impersonating anyone when it accesses /jmx, so in that case the filter should fall back to reporting ams as the remote user. Some links should be treated differently when they are system facing vs. user facing. Let's not mutate this JIRA for the newly found use case. Thank you.

          yuanbo Yuanbo Liu added a comment -

Allen Wittenauer and Eric Yang, thanks for your responses.
I will raise another JIRA to fix it. Thanks again!

          vinodkv Vinod Kumar Vavilapalli added a comment -

          2.8.1 became a security release. Moving fix-version to 2.8.2 after the fact.

          arpitagarwal Arpit Agarwal added a comment -

This change looks incompatible. It breaks doAs for kerberized clusters that allow anonymous auth on the RM webserver. That is not a secure setup, but I am sure it is being used.

          Exact exception below (also HADOOP-14728):

          $ curl -ik 'http://w.x.y.z:8088/ws/v1/cluster/appstatistics/?doAs=guest'
          HTTP/1.1 500 Null user
          Cache-Control: must-revalidate,no-cache,no-store
          Date: Fri, 11 Aug 2017 06:45:28 GMT
          Pragma: no-cache
          Date: Fri, 11 Aug 2017 06:45:28 GMT
          Pragma: no-cache
          Content-Type: text/html; charset=iso-8859-1
          Content-Length: 4288
          Server: Jetty(6.1.26.hwx)
          
          <html>
          <head>
          <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
          <title>Error 500 Null user</title>
          </head>
          <body><h2>HTTP ERROR 500</h2>
          <p>Problem accessing /ws/v1/cluster/appstatistics/. Reason:
          <pre>    Null user</pre></p><h3>Caused by:</h3><pre>java.lang.IllegalArgumentException: Null user
            at org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1409)
            at org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1396)
            at org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:81)
            at org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92)
            at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:95)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
            at org.apache.hadoop.security.AuthenticationWithProxyUserFilter.doFilter(AuthenticationWithProxyUserFilter.java:101)
            at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:576)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
            at org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
            at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
            at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
            at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
            at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:326)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
            at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
          </pre>
          <hr /><i><small>Powered by Jetty://</small></i><br/>
          </body>
          </html>
          

          This worked prior to HADOOP-13119.

          $ curl -ik 'http://w.x.y.z:8088/ws/v1/cluster/appstatistics/?doAs=guest'
          HTTP/1.1 200 OK
          Cache-Control: no-cache
          Expires: Fri, 11 Aug 2017 06:41:24 GMT
          Date: Fri, 11 Aug 2017 06:41:24 GMT
          Pragma: no-cache
          Expires: Fri, 11 Aug 2017 06:41:24 GMT
          Date: Fri, 11 Aug 2017 06:41:24 GMT
          Pragma: no-cache
          Content-Type: application/json
          X-Frame-Options: SAMEORIGIN
          Transfer-Encoding: chunked
          Server: Jetty(6.1.26.hwx)
          
          {"appStatInfo":{"statItem":[{"state":"ACCEPTED","type":"*","count":0},{"state":"KILLED","type":"*","count":0},{"state":"NEW","type":"*","count":0},{"state":"FAILED","type":"*","count":14},{"state":"SUBMITTED","type":"*","count":0},{"state":"FINISHED","type":"*","count":932},{"state":"NEW_SAVING","type":"*","count":0},{"state":"RUNNING","type":"*","count":0}]}}
          

Unfortunately this change was released in 2.7.4, but it should probably be reverted from 2.8.2, 2.7.5 and 2.9.0.

          cc larry mccay and found by Krishna Pandey.
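For illustration, a defensive guard along these lines (a hypothetical sketch, not the committed fix) would avoid passing a null principal into UserGroupInformation.createRemoteUser when auth falls back to anonymous:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class NullSafeRemoteUserRequest extends HttpServletRequestWrapper {

  public NullSafeRemoteUserRequest(HttpServletRequest request) {
    super(request);
  }

  @Override
  public String getRemoteUser() {
    String principal = super.getRemoteUser(); // null when auth is anonymous
    if (principal == null) {
      // The servlet API permits null here; returning it lets anonymous
      // requests proceed instead of failing with "Null user" inside
      // UserGroupInformation.createRemoteUser(...).
      return null;
    }
    String doAs = getParameter("doAs");
    return (doAs != null) ? doAs : principal; // impersonation checks elided
  }
}

Whether falling back to an anonymous identity is acceptable is a policy question; the sketch only shows where the null has to be handled.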


People

  • Assignee: yuanbo Yuanbo Liu
  • Reporter: jeffreyr97 Jeffrey E Rodriguez
  • Votes: 0
  • Watchers: 18
