Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: security
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      To protect against CSRF attacks, HADOOP-12691 introduces a CSRF filter that will require a specific HTTP header to be sent with every REST API call. This will affect all API consumers from web apps to CLIs and curl.

      Since CSRF is primarily a browser-based attack, we can try to minimize the impact on non-browser clients.

      This enhancement will provide additional configuration for identifying non-browser user agents and for skipping enforcement of the header requirement for anything identified as a non-browser. When configured appropriately, this largely limits the impact to browser-based PUT and POST calls.
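
      For illustration, a minimal sketch of what a programmatic client would send once the filter is enforced, assuming the default header name of X-XSRF-HEADER from HADOOP-12691; the host, path, and header value below are placeholders:

      import java.net.HttpURLConnection;
      import java.net.URL;

      public class CsrfAwareClient {
        public static void main(String[] args) throws Exception {
          // Hypothetical WebHDFS MKDIRS call; host and path are placeholders.
          URL url = new URL(
              "http://namenode.example.com:50070/webhdfs/v1/tmp/d1?op=MKDIRS");
          HttpURLConnection conn = (HttpURLConnection) url.openConnection();
          conn.setRequestMethod("PUT");
          // The filter checks only for the header's presence, so any value works.
          conn.setRequestProperty("X-XSRF-HEADER", "1");
          System.out.println("HTTP " + conn.getResponseCode());
          conn.disconnect();
        }
      }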

      1. HADOOP-12758-001.patch
        7 kB
        Larry McCay
      2. HADOOP-12758-002.patch
        7 kB
        Larry McCay
      3. HADOOP-12758-003.patch
        10 kB
        Larry McCay
      4. HADOOP-12758-004.patch
        12 kB
        Larry McCay

        Issue Links

          Activity

          lmccay Larry McCay added a comment -

          Initial patch attached.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 9s Maven dependency ordering for branch
          +1 mvninstall 6m 47s trunk passed
          +1 compile 6m 6s trunk passed with JDK v1.8.0_66
          +1 compile 6m 51s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 20s trunk passed
          +1 mvnsite 1m 4s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 1m 45s trunk passed
          +1 javadoc 0m 53s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 2s trunk passed with JDK v1.7.0_91
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 0m 40s the patch passed
          +1 compile 6m 3s the patch passed with JDK v1.8.0_66
          +1 javac 6m 3s the patch passed
          +1 compile 6m 41s the patch passed with JDK v1.7.0_91
          +1 javac 6m 41s the patch passed
          -1 checkstyle 0m 21s hadoop-common-project/hadoop-common: patch generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1)
          +1 mvnsite 1m 0s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 1s the patch passed
          +1 javadoc 0m 50s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 3s the patch passed with JDK v1.7.0_91
          +1 unit 6m 26s hadoop-common in the patch passed with JDK v1.8.0_66.
          +1 unit 6m 43s hadoop-common in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          59m 1s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12785858/HADOOP-12758-001.patch
          JIRA Issue HADOOP-12758
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 8324ddfcff8e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4ae543f
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/8512/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8512/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8512/console

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          Hi Larry McCay.

          This looks good overall. It looks like some logic was lifted from AltKerberosAuthenticationHandler. Unfortunately, this also picked up some pre-existing Checkstyle violations that were in that class. Could you review the Checkstyle report and address it here in the CSRF filter? (No need to clean up AltKerberosAuthenticationHandler in the scope of this JIRA though.)

          Also, the JavaDocs for isBrowser don't make sense here.

          lmccay@apache.org larry mccay added a comment -

          Will do.

          Thanks, Chris.


          anu Anu Engineer added a comment -

          Hi Larry McCay

          Patch looks good and thank you for addressing this critical security concern. I really appreciate it.

          However, I have a higher-level question. With this patch, we are going to allow non-browser clients (curl, Java, Perl, and wget) to work without the WebHDFS XSRF header.

          But if a user is unable to connect to WebHDFS from a web page due to an XSRF error, he/she can no longer use these tools to reproduce the failure, since we have extra code that allows them to bypass the XSRF security check.
          I am afraid that with this patch we are taking away a really good debugging tool and will perhaps create a bunch of confused WebHDFS users.

          My question is: is this complexity worth it? If a user enables the XSRF check based on your older patch, in most cases the overhead is a hash-table lookup and parsing the header to check the verb.
          I know this is a cool and logically correct optimization to have, but I am worried that it would only create pain for the user, whereas the gains are relatively minor.

          I do see one use case, though: you want to enable XSRF on a cluster but allow older curl-based clients to continue to operate. If that is the use case (and you really have lots of curl-based scripts), then and only then does this make sense. Even then I would actually prefer to modify the older scripts than have these kinds of surprises. If I am missing something here, please do let me know.

          lmccay Larry McCay added a comment -

          Hi Anu Engineer - I believe the behavior differences you describe are certainly possible, but the impact of breaking existing clients that aren't even vulnerable to, or a source of, the attack we are protecting against is worse.

          The error returned via:

          ((HttpServletResponse) response).sendError(
              HttpServletResponse.SC_BAD_REQUEST,
              "Missing Required Header for Vulnerability Protection");

          should provide some level of diagnostic clarity.
          If we find that we need more, we can do more there.

          Being so strict with this enforcement that curl scripting and Groovy/Java/Python/Perl clients all break will likely result in the filter not being used, rather than clients being changed.

          Does this make sense?
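
          For context, here is a minimal sketch of how an enforcement branch like the snippet above could sit inside a servlet filter. The class, parameter, and method names are illustrative assumptions, not the exact patch code:

          import java.io.IOException;
          import javax.servlet.Filter;
          import javax.servlet.FilterChain;
          import javax.servlet.FilterConfig;
          import javax.servlet.ServletException;
          import javax.servlet.ServletRequest;
          import javax.servlet.ServletResponse;
          import javax.servlet.http.HttpServletRequest;
          import javax.servlet.http.HttpServletResponse;

          public class CsrfHeaderFilterSketch implements Filter {
            // Assumed default header name from HADOOP-12691.
            private String headerName = "X-XSRF-HEADER";

            @Override
            public void init(FilterConfig filterConfig) {
              String configured = filterConfig.getInitParameter("custom-header");
              if (configured != null) {
                headerName = configured;
              }
            }

            @Override
            public void doFilter(ServletRequest request, ServletResponse response,
                FilterChain chain) throws IOException, ServletException {
              HttpServletRequest httpRequest = (HttpServletRequest) request;
              // GET and HEAD are not state-changing, so only other verbs are checked.
              boolean safeMethod = "GET".equals(httpRequest.getMethod())
                  || "HEAD".equals(httpRequest.getMethod());
              if (safeMethod || httpRequest.getHeader(headerName) != null) {
                chain.doFilter(request, response);
              } else {
                ((HttpServletResponse) response).sendError(
                    HttpServletResponse.SC_BAD_REQUEST,
                    "Missing Required Header for Vulnerability Protection");
              }
            }

            @Override
            public void destroy() {
            }
          }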

          anu Anu Engineer added a comment -

          impact of breaking existing clients that aren't even vulnerable

          I do see your point, but from that angle your fix is incomplete, in the sense that there are more clients like this in the world.

          It is not just web pages: if I am using Ruby or Python, I need to add this flag, but if I am using Java or Perl, I don't. Confusing, right?
          I would argue that it is not possible to enumerate all your clients, hence you shouldn't try to.

          But if you are concerned the XSRF fix will not be used without this modification, I am not dead set against it. I would argue it is in the best interest of users to switch on the XSRF check, and most of our users are smart enough to understand it.
          I just feel that we have way too many of these special cases in the HDFS world, which go against the principle of least surprise (or principle of least astonishment).

          Should provide some level of diagnostic clarity.

          I completely agree that the error message is pretty good, but what concerns me is: why bypass it for a set of arbitrarily chosen clients?

          lmccay Larry McCay added a comment -

          Patch to address checkstyle errors and javadocs.

          lmccay Larry McCay added a comment -

          Anu Engineer - we can certainly add to the list of non-browser defaults.
          I don't happen to know the user-agent for Python or Ruby HTTP client libraries.
          If you do, and you think we should add them to the defaults, let me know.

          anu Anu Engineer added a comment -

          No Larry, I don't want to add user-agents for other libraries. I think you are missing my point: we should not special-case various user-agents. I think we should have XSRF protection enabled by default in trunk (your last patch) and let clients use the right headers if they want to use it in the 2.x branch. This special-casing leads to lots of corner cases which are not very useful.

          By the way, I don't think many of these client libraries (like Python or Ruby) are well behaved or have standard user-agent headers. You can add a header if you need to, but a lot of them have no standard user agent.

          Also, with this feature an end user can override an administrator-controlled, cluster-wide setting. For example, I could have a user-agent-spoofing Chrome extension and override the XSRF setting by setting the user agent to "curl" even though I am using a browser. Worse, if I did that, as a user I would be open to an XSRF attack.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 11s Maven dependency ordering for branch
          +1 mvninstall 7m 41s trunk passed
          +1 compile 7m 54s trunk passed with JDK v1.8.0_66
          +1 compile 8m 41s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 25s trunk passed
          +1 mvnsite 1m 17s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 2m 5s trunk passed
          +1 javadoc 1m 4s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 12s trunk passed with JDK v1.7.0_91
          0 mvndep 0m 9s Maven dependency ordering for patch
          +1 mvninstall 0m 48s the patch passed
          +1 compile 8m 42s the patch passed with JDK v1.8.0_66
          +1 javac 8m 42s the patch passed
          +1 compile 8m 34s the patch passed with JDK v1.7.0_91
          +1 javac 8m 34s the patch passed
          +1 checkstyle 0m 25s the patch passed
          +1 mvnsite 1m 17s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 27s the patch passed
          +1 javadoc 1m 6s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 18s the patch passed with JDK v1.7.0_91
          +1 unit 9m 25s hadoop-common in the patch passed with JDK v1.8.0_66.
          +1 unit 9m 21s hadoop-common in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          76m 19s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12785897/HADOOP-12758-002.patch
          JIRA Issue HADOOP-12758
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 7aefa2993362 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / ccbba4a
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8513/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Max memory used 77MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8513/console

          This message was automatically generated.

          lmccay Larry McCay added a comment -

          I am not missing your point.
          I am trying to strike the balance between CSRF protection and breaking existing consumers.
          If the existing consumers we are talking about were vulnerable to this attack, it would be a different story.

          Breaking Java clients (the server side of webapps, custom CLI apps, Hadoop CLIs, third-party integrations), scripting (Ambari calls from Python scripts, cron-driven curl scripting, Groovy-based scripting through Knox), generic command-line tools (curl, wget), etc., when none of them are vulnerable to the actual attack, would be wrong.

          So, I think we can address the user-agent question a couple of ways:

          • default to a list of common non-browser user-agents and hopefully not need to configure an override
          • default to no exclusions and require admins to configure any user-agents they want exempted

          "Btw, I don't think many of these client libraries - like Python / Ruby are well behaved or have standard user-agent headers. You can add a header if you need, but lot of them have no standard user agent."

          That was my conclusion but was hoping that I was wrong. :/

          anu Anu Engineer added a comment -

          I am trying to strike the balance between CSRF protection and breaking existing consumers.

          I appreciate that thought. I am trying to make sure that the security provided by your last patch is not compromised and that end users see consistent behavior.

          default to no exclusions and require admins to configure any user-agents they want exempted

          I think we should default to no exclusions and let admins override for any user agents they want.
          The reason is security: the user-agent is a string that can be easily spoofed.
          So assuming that a client is not a browser based on a string like that does not look very secure.

          lmccay Larry McCay added a comment -

          The nature of the CSRF attack and the protection provided by sending an HTTP header hinge on the facts that:

          a. headers cannot be added by malicious HTML, such as a FORM served from a malicious page
          b. if JavaScript were used to add a header from an origin other than the one that serves the valid pages, the browser's cross-origin policies would not allow the header to be added unless it is explicitly allowed

          This is not a security issue.

          As I said earlier, this is an existing pattern in Hadoop. Defaulting it such that every deployment must configure the non-browser user-agents makes its behavior inconsistent with the authentication handler.

          I suggest that this go in as is. Since this is a common filter, individual components taking it up may do what they want with default values for consistency with the rest of the platform. See HDFS-9711.

          cnauroth Chris Nauroth added a comment -

          To summarize the design document Larry attached to HADOOP-12691, this feature is intended to provide protection against browser-based attack vectors. The attack would originate with some form of social engineering, such as a phishing email linking to a malicious web form or piggy-backing on a pre-existing XSS vulnerability. These kinds of attacks specifically target the browser interaction model and not scripts/programmatic access. I think it's acceptable to provide a solution that maximizes backwards-compatibility for programmatic access while still protecting browsers.

          For example, I could have a user-agent-spoofing Chrome extension and override the XSRF setting by setting the user agent to "curl" even though I am using a browser.

          There would be no value in an attacker setting up an extension like this in their own browser, because an XSRF attack targets the authenticated user. While it's true that the user agent string could be spoofed, it provides no value as an attack vector, because it doesn't provide a way to spoof authentication. Taking the example of the NameNode web UI and WebHDFS, this extension wouldn't give the attacker the capability to do anything that they can't already do. They could simply authenticate, go to the file browser in the NameNode web UI, and manipulate files directly. They can only harm files if they already have access to them.

          If the attacker finds a way to inject such a plugin into a different authenticated user's browser as malware, then that would provide an attack vector. However, at that point the battle is already lost. If the attacker can inject malware, then they can pretty much run arbitrary code and defeat any further protection mechanisms.

          I am in favor of the approach in this patch.

          lmccay Larry McCay added a comment -

          Chris Nauroth - thanks for articulating that - I agree with you.

          From an ops perspective we need to minimize what needs to be done by the admin for each component across the platform as much as possible. Having sane defaults for a mechanism like this is very important.

          There may be a third option: reverse the semantics of the excluded user-agents and maintain a list of browsers instead of non-browsers. This would address both the defaults for ops/admins and avoid breaking any non-browsers. Python and Ruby clients wouldn't need to provide a user-agent that matches the list.

          It would likely result in many strings, given the variations in the user-agent string across versions. Perhaps it would make sense to use a list of regex patterns to match browsers. We could then wildcard a bit and reduce the hassle of maintaining the list with every browser release. It would, however, come at the cost of more complicated configuration: regex.

          If this is palatable to folks, then maybe we could add it to AltKerberosAuthenticationHandler as an option as well.

          I am open to other suggestions as well.

          anu Anu Engineer added a comment -

          Hi Chris Nauroth and Larry McCay,

          Thanks for your comments. To make sure we are all on the same page, I would like to summarize what I think is the issue from my point of view.

          1. It introduces inconsistent behavior among various clients. That is, curl and Perl would work, but the same HTTP request would not work from Python or Ruby. That is the biggest concern.
            Most developers will not even realize that we are reading the user-agent string on the server side to behave differently. This creates subtle behavior differences in a REST protocol like WebHDFS.
          2. Making the first feature a default. Relying on the user-agent so that some older clients can work without code modification should be a feature that the administrator enables case by case; I think it should not be the default behavior. The point about the curl string (or any user agent) being easily spoofable was to reinforce this. We should not introduce subtle behavior changes to a REST protocol based on which client it is. We should by all means provide the feature if it makes the lives of admins easier, but I don't think we should make it the default.

          Just to recap: issue one is inconsistency, and issue two is making that inconsistency the default choice. I am all for shipping this feature without any default agents pre-baked into the code and letting admins make that choice.

          That way, the default behavior of WebHDFS is consistent whether you use curl, Ruby, Python, Java, Perl, or JavaScript, and admins always have the option to modify the settings if they deem that they cannot modify older client code.

          P.S. Even if admins enable this feature, it will still break older WebHDFS clients written in Python or Ruby, so we will still need to document that fact with this new configuration parameter.

          lmccay Larry McCay added a comment -

          Hi Anu Engineer - I appreciate your concerns and believe that the third option I mentioned may resolve the inconsistency while also providing sane defaults that allow all clients that are not vulnerable to the attack, and should not be subject to the requirements imposed on browsers, to continue to work.

          If we reverse the list of non-browsers into a list of browser-matching regex patterns, then all non-browsers will continue to work.
          The challenge will be to identify a set of regex patterns that covers the vast majority of browsers.

          I don't see the requirement that browsers/AJAX provide additional HTTP headers as a change to the REST protocol. It is an access requirement between the browser client and the server and does not involve the semantics of the REST API itself. It is really an application-level concern, similar to authentication mechanisms differing between a web app and a curl invocation.

          So, let's walk through the option 3 description:

          1. a default set of browser-matching regex patterns is in place
          2. all non-browsers continue to work without being required to send a header that protects against an attack they are not vulnerable to
          3. AJAX calls are changed to send the header, using the configured or default header names
          4. user-agents that match a regex pattern have their header validated as present and therefore as coming from a valid AJAX client
          5. user-agents that do not match a regex pattern do not have their header validated and are therefore vulnerable to CSRF

          #5 above is the concerning bit introduced by this option 3.
          We might be able to discern some additional client context from the Referer header, which also cannot be altered by AJAX calls.
          This will require some investigation into what the Referer is (if anything at all) for non-browsers.

          The idea being something like...

          1. if #5 and there is no Referer, then we don't validate the header, or
          2. if #5 and the Referer matches some additional configuration for allowed referrers, then validate the existence of the header (this would indicate the use of an unusual browser, which should be added to the regex patterns)

          That level could be a follow-up JIRA, since with a sane set of default browser regex patterns this should be an edge case.

          Thoughts?
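
          For concreteness, a minimal sketch of the browser check described in steps 4 and 5 above, assuming the configured patterns are full-match regexes such as "^Mozilla.*"; all names here are illustrative, not the patch code:

          import java.util.regex.Pattern;

          /** Illustrative option-3 decision logic. */
          final class BrowserCheckSketch {
            private final Pattern[] browserUserAgents;

            BrowserCheckSketch(Pattern[] browserUserAgents) {
              this.browserUserAgents = browserUserAgents;
            }

            /** True when the User-Agent matches any configured browser pattern. */
            boolean isBrowser(String userAgent) {
              if (userAgent == null) {
                return false; // no User-Agent header at all: treat as non-browser
              }
              for (Pattern pattern : browserUserAgents) {
                if (pattern.matcher(userAgent).matches()) {
                  return true;
                }
              }
              return false;
            }

            /** Browsers must present the CSRF header; non-browsers pass through. */
            boolean requestAllowed(String userAgent, String csrfHeader) {
              return !isBrowser(userAgent) || csrfHeader != null;
            }
          }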

          anu Anu Engineer added a comment -

          +1 on option 3. I think it addresses all my concerns. Thanks for resolving this.

          lmccay Larry McCay added a comment -

          Thank you for raising your concerns, Anu Engineer.

          lmccay Larry McCay added a comment -

          As I look to add regex patterns to test for option 3, it seems to me that "^Mozilla" would pretty much cover everything we need to be concerned with in the near term. I'm basing this on http://www.useragentstring.com/pages/All/.

          Considering that we don't need to tailor behavior based on which browser it is, but only on whether it is a browser, this may be adequate.

          Would anyone have ideas about what I may be missing?

          lmccay Larry McCay added a comment -

          Looks to me like "^Mozilla;^Opera" will likely cover what we need.

          lmccay Larry McCay added a comment -

          From what I can tell, Mozilla will match not only browsers but also some bots/spiders. I don't imagine that this overlap is really an issue though. Do we need to allow for crawling of webpages without CSRF protection?

          I am going to move forward with a semicolon-separated list of regex patterns like:

          <property>
            <name>{component.prefix}.browser.useragents.regex</name>
            <value>^Mozilla.*;^Opera.*</value>
            <description>Regex patterns for matching browser user-agents</description>
          </property>

          cnauroth Chris Nauroth added a comment -

          Larry McCay, the plan sounds good.

          Do we need to allow for crawling of webpages without CSRF protection?

          I expect it's not an issue, given that our use case covers internal back-end web services, and not public Internet sites that would benefit from allowing crawlers. If anyone really is crawling Hadoop web services, then I expect they have control of that client and so can either code it to set the header or reconfigure the regex to allow it.

          lmccay Larry McCay added a comment -

          Changed the user-agent check from a list of non-browsers to a list of regex patterns that match browsers. This addresses the issue of inconsistently breaking some non-browser clients, since we cannot expect a particular user-agent from them to check for.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 8s Maven dependency ordering for branch
          +1 mvninstall 6m 53s trunk passed
          +1 compile 5m 58s trunk passed with JDK v1.8.0_72
          +1 compile 7m 1s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 19s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 33s trunk passed
          +1 javadoc 0m 59s trunk passed with JDK v1.8.0_72
          +1 javadoc 1m 5s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 0m 42s the patch passed
          +1 compile 7m 4s the patch passed with JDK v1.8.0_72
          +1 javac 7m 4s the patch passed
          +1 compile 6m 57s the patch passed with JDK v1.7.0_95
          +1 javac 6m 57s the patch passed
          +1 checkstyle 0m 22s the patch passed
          +1 mvnsite 1m 3s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 48s the patch passed
          +1 javadoc 0m 56s the patch passed with JDK v1.8.0_72
          +1 javadoc 1m 9s the patch passed with JDK v1.7.0_95
          +1 unit 7m 16s hadoop-common in the patch passed with JDK v1.8.0_72.
          +1 unit 7m 32s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 21s Patch does not generate ASF License warnings.
          62m 12s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12786519/HADOOP-12758-003.patch
          JIRA Issue HADOOP-12758
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 2c856be592c2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4e5e1c0
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8550/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Max memory used 77MB
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8550/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          Hi Larry McCay. This looks good. Just a few comments:

          1. There is existing precedent for using a comma as the delimiter in multi-valued configuration properties. A comma would also let this code use convenience methods like Configuration#getTrimmedStrings. Is there a reason that a semicolon works better here?
          2. Let's store browserUserAgents as a Pattern[] at initialization time. That way, we won't need to recompile on every HTTP request.
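
          A minimal sketch of the init-time precompilation suggested in item 2, assuming a comma-delimited filter init parameter; the parameter name and default value are assumptions:

          import java.util.regex.Pattern;
          import javax.servlet.FilterConfig;

          /** Illustrative init logic; parameter name and default are assumptions. */
          public class BrowserPatternInitSketch {
            static final String BROWSER_USER_AGENTS_PARAM = "browser-useragents-regex";
            static final String BROWSER_USER_AGENTS_DEFAULT = "^Mozilla.*,^Opera.*";

            private Pattern[] browserUserAgents;

            public void init(FilterConfig filterConfig) {
              String agents = filterConfig.getInitParameter(BROWSER_USER_AGENTS_PARAM);
              if (agents == null) {
                agents = BROWSER_USER_AGENTS_DEFAULT;
              }
              // Compile once here so per-request checks only run the matchers.
              String[] parts = agents.split(",");
              browserUserAgents = new Pattern[parts.length];
              for (int i = 0; i < parts.length; i++) {
                browserUserAgents[i] = Pattern.compile(parts[i].trim());
              }
            }
          }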
          lmccay Larry McCay added a comment -

          Fair enough - I'll make those quick changes and resubmit.
          Thanks for the review!

          lmccay Larry McCay added a comment -

          Changed to a comma-delimited list of regexes, with the precompiled Patterns stored as a member variable on init. Also added a test for a custom browserUserAgents list.

          cnauroth Chris Nauroth added a comment -

          +1 for patch v004, pending pre-commit.

          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 10s Maven dependency ordering for branch
          +1 mvninstall 6m 44s trunk passed
          +1 compile 6m 21s trunk passed with JDK v1.8.0_72
          +1 compile 6m 38s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 20s trunk passed
          +1 mvnsite 1m 3s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 32s trunk passed
          +1 javadoc 0m 52s trunk passed with JDK v1.8.0_72
          +1 javadoc 1m 2s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 0m 39s the patch passed
          +1 compile 5m 38s the patch passed with JDK v1.8.0_72
          +1 javac 5m 38s the patch passed
          +1 compile 6m 32s the patch passed with JDK v1.7.0_95
          +1 javac 6m 32s the patch passed
          +1 checkstyle 0m 21s the patch passed
          +1 mvnsite 1m 0s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 1m 45s the patch passed
          +1 javadoc 0m 50s the patch passed with JDK v1.8.0_72
          +1 javadoc 1m 2s the patch passed with JDK v1.7.0_95
          +1 unit 7m 3s hadoop-common in the patch passed with JDK v1.8.0_72.
          +1 unit 7m 22s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          59m 14s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12786560/HADOOP-12758-004.patch
          JIRA Issue HADOOP-12758
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 849c962d835f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9086dd5
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8552/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Max memory used 77MB
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8552/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          I have committed this to trunk, branch-2, and branch-2.8. Larry McCay, thank you for contributing the patch. Anu Engineer, thank you for participating in the code review.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9251 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9251/)
          HADOOP-12758. Extend CSRF Filter with UserAgent Checks. Contributed by (cnauroth: rev a37e423e8407c42988577d87907d13ce0432dda1)

          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/http/TestRestCsrfPreventionFilter.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/RestCsrfPreventionFilter.java

            People

            • Assignee:
              lmccay Larry McCay
            • Reporter:
              lmccay Larry McCay