Hadoop HDFS / HDFS-9711

Integrate CSRF prevention filter in WebHDFS.

    Details

    • Target Version/s:
    • Hadoop Flags:
      Reviewed
    • Release Note:
      WebHDFS now supports options to enforce cross-site request forgery (CSRF) prevention for HTTP requests to both the NameNode and the DataNode. Please refer to the updated WebHDFS documentation for a description of this feature and further details on how to configure it.

      Description

      HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard against cross-site request forgery attacks. This issue tracks integration of that filter in WebHDFS.

      1. HDFS-9711.001.patch
        42 kB
        Chris Nauroth
      2. HDFS-9711.002.patch
        42 kB
        Chris Nauroth
      3. HDFS-9711.003.patch
        45 kB
        Chris Nauroth
      4. HDFS-9711.004.patch
        47 kB
        Chris Nauroth
      5. HDFS-9711.005.patch
        48 kB
        Chris Nauroth
      6. HDFS-9711-branch-2.005.patch
        49 kB
        Chris Nauroth

        Issue Links

          Activity

          cnauroth Chris Nauroth added a comment -

          This patch integrates WebHDFS with the CSRF prevention filter added in HADOOP-12691. Here is a summary of the changes.

          • RestCsrfPreventionFilter has been changed to expose some of its filtering logic in public methods. This was required to facilitate integration with the DataNode, which implements its WebHDFS endpoint using Netty instead of the servlet API. I also added a convenience method that will help a lot of components bootstrap the configuration.
          • HdfsClientConfigKeys and hdfs-default.xml have new configuration properties for controlling the specifics of the filter.
          • WebHdfsFileSystem sends the custom header if necessary.
          • NameNodeHttpServer sets up the filter configuration and calls HttpServer2#defineFilter. I expect this is what integration of the CSRF filter will look like for the vast majority of components that base their HTTP server on the HttpServer2 class.
          • DatanodeHttpServer is a little trickier. We need to stub just enough of the servlet API for filter initialization. The fully initialized filter gets passed along to RestCsrfPreventionFilterHandler for subsequent use in the Netty pipeline.
          • The NameNode web UI file explorer discovers if CSRF prevention is enabled, and if so, sets up subsequent jQuery AJAX calls to include the custom header. This logic is in a separate rest-csrf.js module in case it ever needs to be reused outside the file explorer.
          • TestWebHdfsWithRestCsrfPreventionFilter is a new test suite that covers various possible configuration combinations, for a GET, a PUT, a DELETE, and a POST.
          • WebHDFS documentation has been updated.

          In addition to the new test suite, I manually tested the NameNode web UI, various hdfs dfs commands with "webhdfs:" URIs, and DistCp.
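The enforcement rule these changes wire in (spelled out later in this thread: a request whose method is not in methodsToIgnore must carry the configured custom header) can be sketched in a few lines. This is an illustrative model only, not Hadoop's actual API; the header name and default ignored methods are taken from the filter's log output quoted further down.

```python
# Illustrative model of the CSRF prevention check integrated here.
# Defaults match the filter's log message in this thread; the function
# name and signature are hypothetical, not Hadoop's real API.

DEFAULT_HEADER = "X-XSRF-HEADER"
DEFAULT_METHODS_TO_IGNORE = {"GET", "OPTIONS", "HEAD", "TRACE"}

def is_request_allowed(method, headers,
                       header_name=DEFAULT_HEADER,
                       methods_to_ignore=DEFAULT_METHODS_TO_IGNORE):
    """Allow a request if its method is exempt or the custom header is present."""
    if method.upper() in methods_to_ignore:
        return True
    return header_name in headers
```

The same check runs in both servers: the NameNode applies it via a standard servlet filter, while the DataNode invokes the exposed logic from its Netty pipeline.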

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 31s Maven dependency ordering for branch
          +1 mvninstall 6m 50s trunk passed
          +1 compile 6m 47s trunk passed with JDK v1.8.0_66
          +1 compile 7m 13s trunk passed with JDK v1.7.0_91
          +1 checkstyle 1m 7s trunk passed
          +1 mvnsite 2m 37s trunk passed
          +1 mvneclipse 0m 41s trunk passed
          +1 findbugs 5m 33s trunk passed
          +1 javadoc 2m 18s trunk passed with JDK v1.8.0_66
          +1 javadoc 3m 15s trunk passed with JDK v1.7.0_91
          0 mvndep 0m 24s Maven dependency ordering for patch
          +1 mvninstall 1m 55s the patch passed
          +1 compile 6m 28s the patch passed with JDK v1.8.0_66
          -1 javac 8m 47s root-jdk1.8.0_66 with JDK v1.8.0_66 generated 1 new + 739 unchanged - 0 fixed = 740 total (was 739)
          +1 javac 6m 28s the patch passed
          +1 compile 6m 51s the patch passed with JDK v1.7.0_91
          -1 javac 15m 38s root-jdk1.7.0_91 with JDK v1.7.0_91 generated 1 new + 735 unchanged - 0 fixed = 736 total (was 735)
          +1 javac 6m 51s the patch passed
          +1 checkstyle 1m 5s root: patch generated 0 new + 108 unchanged - 5 fixed = 108 total (was 113)
          +1 mvnsite 2m 28s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 6m 10s the patch passed
          +1 javadoc 2m 17s the patch passed with JDK v1.8.0_66
          +1 javadoc 3m 9s the patch passed with JDK v1.7.0_91
          +1 unit 6m 35s hadoop-common in the patch passed with JDK v1.8.0_66.
          +1 unit 0m 51s hadoop-hdfs-client in the patch passed with JDK v1.8.0_66.
          -1 unit 52m 18s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          +1 unit 6m 48s hadoop-common in the patch passed with JDK v1.7.0_91.
          +1 unit 0m 58s hadoop-hdfs-client in the patch passed with JDK v1.7.0_91.
          -1 unit 52m 38s hadoop-hdfs in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 25s Patch does not generate ASF License warnings.
          190m 34s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
            hadoop.hdfs.server.namenode.TestDecommissioningStatus
            hadoop.hdfs.server.datanode.TestBlockScanner
          JDK v1.7.0_91 Failed junit tests hadoop.fs.viewfs.TestViewFileSystemHdfs



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784722/HDFS-9711.001.patch
          JIRA Issue HDFS-9711
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 78e7fe92c3d2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / fb238d7
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          javac root-jdk1.8.0_66: https://builds.apache.org/job/PreCommit-HDFS-Build/14268/artifact/patchprocess/diff-compile-javac-root-jdk1.8.0_66.txt
          javac root-jdk1.7.0_91: https://builds.apache.org/job/PreCommit-HDFS-Build/14268/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_91.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14268/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14268/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14268/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14268/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14268/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14268/console

          This message was automatically generated.

          anu Anu Engineer added a comment -

          +1 (non-binding). Thanks for contributing this patch, Chris Nauroth.

          Some nitpicks (feel free to ignore them; I just wanted to flag them for your attention):

          1. In NameNodeHttpServer#initWebHdfs you might want to log the fact that CSRF protection is enabled on the NameNode side.
          2. In WebHDFS.md, would you please add an example showing how the curl commands can be sent when the custom header is X-XSRF-HEADER.
          cnauroth Chris Nauroth added a comment -

          Anu Engineer, thank you for the code review. I am attaching patch v002 with the following changes:

          1. Correct the deprecation warning flagged by pre-commit.
          2. Add a curl example to the docs, showing usage of the -H option to pass the header.
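As a parallel to that curl -H usage, a programmatic client would attach the same custom header on each request. A minimal sketch with Python's standard library follows; the URL, host, and port are placeholders, and the header name is the filter's default from this thread.

```python
import urllib.request

# Hypothetical WebHDFS endpoint; only the header handling matters here.
url = "http://namenode.example.com:9870/webhdfs/v1/tmp/f?op=CREATE"

# Attach the CSRF custom header, analogous to curl's -H option.
# The filter only checks for the header's presence, so an empty value works.
req = urllib.request.Request(url, method="PUT",
                             headers={"X-XSRF-HEADER": ""})
```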

          In NameNodeHttpServer#initWebHdfs you might want to log the fact that CSRF protection is enabled on the NameNode side.

          I have been relying on the initialization log message from within RestCsrfPreventionFilter, which would cover any daemon that integrates with the filter. This is what the message looks like:

          16/01/28 12:18:27 INFO http.RestCsrfPreventionFilter: Adding cross-site request forgery (CSRF) protection, headerName = X-XSRF-HEADER, methodsToIgnore = [GET, OPTIONS, HEAD, TRACE]
          

          Is that sufficient, or did you have something else in mind?

          anu Anu Engineer added a comment -

          It looks perfect, thanks.

          wheat9 Haohui Mai added a comment -

          I don't think it needs to be configurable. The only thing that needs to be done is to make sure that the 307 redirect generated by the NN includes the CSRF header.

          cnauroth Chris Nauroth added a comment -

          I'm not aware of any way for an HTTP redirect response to tell the client to add another custom header before sending the second request (aside from setting a cookie). Even if there were a way, it would defeat the intent of blocking the request. Something like a malicious HTML form using POST would pick up the header on the redirect and then succeed.

          This has been made configurable for backwards-compatibility with clients that might not be prepared to deal with the custom header.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 45s Maven dependency ordering for branch
          +1 mvninstall 12m 25s trunk passed
          +1 compile 17m 59s trunk passed with JDK v1.8.0_66
          +1 compile 12m 44s trunk passed with JDK v1.7.0_91
          +1 checkstyle 1m 38s trunk passed
          +1 mvnsite 4m 29s trunk passed
          +1 mvneclipse 1m 0s trunk passed
          +1 findbugs 9m 16s trunk passed
          +1 javadoc 4m 53s trunk passed with JDK v1.8.0_66
          +1 javadoc 5m 39s trunk passed with JDK v1.7.0_91
          0 mvndep 0m 39s Maven dependency ordering for patch
          +1 mvninstall 3m 29s the patch passed
          +1 compile 17m 32s the patch passed with JDK v1.8.0_66
          +1 javac 17m 32s the patch passed
          +1 compile 13m 12s the patch passed with JDK v1.7.0_91
          +1 javac 13m 12s the patch passed
          -1 checkstyle 1m 36s root: patch generated 1 new + 107 unchanged - 5 fixed = 108 total (was 112)
          +1 mvnsite 4m 7s the patch passed
          +1 mvneclipse 0m 57s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 10m 1s the patch passed
          +1 javadoc 4m 46s the patch passed with JDK v1.8.0_66
          +1 javadoc 5m 23s the patch passed with JDK v1.7.0_91
          -1 unit 14m 1s hadoop-common in the patch failed with JDK v1.8.0_66.
          +1 unit 1m 45s hadoop-hdfs-client in the patch passed with JDK v1.8.0_66.
          -1 unit 162m 30s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          -1 unit 17m 58s hadoop-common in the patch failed with JDK v1.7.0_91.
          +1 unit 3m 11s hadoop-hdfs-client in the patch passed with JDK v1.7.0_91.
          -1 unit 1m 36s hadoop-hdfs in the patch failed with JDK v1.7.0_91.
          +1 asflicense 1m 1s Patch does not generate ASF License warnings.
          337m 6s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.fs.shell.find.TestPrint
            hadoop.fs.shell.find.TestIname
            hadoop.fs.shell.find.TestName
            hadoop.ipc.TestRPCWaitForProxy
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.tracing.TestTracing
            hadoop.hdfs.server.namenode.ha.TestEditLogTailer
            hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.security.TestDelegationTokenForProxyUser
            hadoop.hdfs.TestLocalDFS
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
            hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
            hadoop.hdfs.tools.TestDFSAdminWithHA
            hadoop.hdfs.server.datanode.TestBlockReplacement
            hadoop.fs.TestSymlinkHdfsFileContext
            hadoop.hdfs.server.namenode.TestDecommissioningStatus
            hadoop.hdfs.qjournal.TestSecureNNWithQJM
            hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider
            hadoop.hdfs.server.datanode.TestDirectoryScanner
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.hdfs.TestLeaseRecovery2
          JDK v1.7.0_91 Failed junit tests hadoop.io.compress.TestCodecPool
            hadoop.fs.shell.find.TestPrint
            hadoop.fs.shell.find.TestPrint0
            hadoop.fs.shell.find.TestIname
            hadoop.fs.shell.find.TestName
            hadoop.ipc.TestRPCWaitForProxy



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12785013/HDFS-9711.002.patch
          JIRA Issue HDFS-9711
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 45b0e1df8175 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 61382ff
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14280/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14280/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14280/console

          This message was automatically generated.

          wheat9 Haohui Mai added a comment -

          Even if there was a way, it would defeat the intent of blocking the request. Something like a malicious HTML form using POST would pick up the header on the redirect and then succeed.

          It sounds like we're talking about different things. Can you please walk me through what happens when a client makes a WebHDFS request to the NN?

          cnauroth Chris Nauroth added a comment -

          Haohui Mai, sure thing.

          There is a more specific design document from Larry McCay attached to the linked HADOOP-12691 issue. The scope of that issue is to provide an option for prevention of browser-based CSRF attacks across the stack. The attack vector described in that document is injection of an <img> tag to force a malicious HTTP GET. That scenario isn't directly relevant to WebHDFS, because WebHDFS doesn't implement mutating operations over GET, so I'll walk through a specific WebHDFS example.

          Suppose an attacker hosts a malicious web form on a domain under his control. The form uses the POST action targeting a WebHDFS truncate URL. Through social engineering, the attacker tricks an authenticated user into accessing the form and submitting it.

          Without CSRF prevention enabled:

          1. The browser sends the HTTP POST request to the NameNode.
          2. The NameNode sends a 307 redirect response targeting a DataNode.
          3. The browser repeats the same request to the DataNode URL.
          4. At the DataNode, the call executes and truncates the file.

          With CSRF prevention enabled:

          1. The browser sends the HTTP POST request to the NameNode.
          2. The NameNode sends a 400 response with reason "Missing Required Header for Vulnerability Protection".
          3. The truncation doesn't happen.

          Since an HTML form cannot set arbitrary custom headers, this attack cannot be altered to bypass the prevention. It is possible that the attacker's form could auto-submit an AJAX request that sets arbitrary headers, but in that case, we expect the browser's same-origin policy to block the request.

          The same enforcement is applied at the DataNode too, so if the attacker somehow manages to bypass the NameNode redirect step, it would still be blocked.

          Both the web UI and WebHdfsFileSystem will send the required header if CSRF prevention is enabled, so they will continue to function normally. Custom clients and scripts, such as curl invocations, might need modifications to send the header, so this feature needs to be disabled by default for backwards compatibility. The scope of HADOOP-12691 specifically targets browser-based attacks.
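
          The enforcement described above reduces to a small predicate: a request passes if its HTTP method is non-mutating, or if it carries the configured custom header, which a plain HTML form cannot set. The following is a rough illustrative sketch of that check (hypothetical names, not the actual RestCsrfPreventionFilter API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CsrfCheckSketch {

  // Non-mutating methods that are never blocked; WebHDFS reads use GET.
  private static final Set<String> SAFE_METHODS =
      new HashSet<>(Arrays.asList("GET", "HEAD", "OPTIONS"));

  /**
   * A request is allowed if the method cannot mutate state, or if the
   * client proved it can set custom headers (which a plain HTML form
   * cannot) by sending the configured CSRF header.
   */
  public static boolean isRequestAllowed(String method, String csrfHeader) {
    return SAFE_METHODS.contains(method) || csrfHeader != null;
  }

  public static void main(String[] args) {
    // Malicious form POST: the browser sends no custom header -> blocked.
    System.out.println(isRequestAllowed("POST", null));   // false
    // WebHdfsFileSystem sends the header -> allowed.
    System.out.println(isRequestAllowed("POST", ""));     // true
    // Reads over GET are unaffected.
    System.out.println(isRequestAllowed("GET", null));    // true
  }
}
```

          Only the presence of the header matters, not its value; the point is proving the request came from a client capable of setting headers at all.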

          lmccay Larry McCay added a comment -

          Excellent summary, Chris Nauroth!

          cnauroth Chris Nauroth added a comment -

          The checkstyle warning is on a missing package-info.java file. I'm not going to address it, because we aren't generally creating package-info.java files around things that are not public APIs. I fixed several other pre-existing checkstyle warnings as part of the patch.

          I cannot reproduce the test failures from the last run.

          wheat9 Haohui Mai added a comment -

          Thanks for the explanation, Chris Nauroth!

          I agree that there should be a configuration to turn on and off this feature on the server side.

          However, the name of the header is part of the contract, and thus it should not be configurable. The client can always send the header.

          cnauroth Chris Nauroth added a comment -

          However, the name of the header is part of the contract thus it should not be configurable. The client can always send the header.

          Yes, that makes sense, probably X-XSRF-HDFS.

          Bringing in Larry McCay once again too... Larry, what was the intent behind making the header name configurable in HADOOP-12691? Was that just so that different components could use different header names, like X-XSRF-HDFS vs. X-XSRF-YARN, or did you think there was a reason that individual deployments might need to customize the header name? If the former, then I'm going to remove the configurability at the HDFS layer and just always use X-XSRF-HDFS for the header name.

          lmccay Larry McCay added a comment -

          Heads up, Chris Nauroth - my latest patch changed the error messages slightly in order to provide more clarity of the vulnerability check that is in violation. Changed "Missing Required Header for Vulnerability Protection" to "Missing Required Header for CSRF Vulnerability Protection".

          Sorry for the inconvenience. If this is troublesome or you don't feel it is needed, I can revert that change.

          The configurability of the header name, I think, is just a general convenience. Some shops have very strict guidelines on what they do for certain things. If they wanted to always use the same header for CSRF protection for the convenience of the app developers, then they could configure the CSRF filter across the platform to expect the same header. If a shop has some notion that headers per component makes sense, then they could do that as well. Otherwise, I would have expected the default to be used.

          From a platform perspective, I would rather the same header be used across the board so as not to put too much of a burden on an app that must communicate with many components - as Ambari might have to. Finding out and keeping track of the header name for each component in every deployment may be a lot.

          The fact of the matter is that it really doesn't matter from a security perspective what the name is, as long as it is the name configured for the filter enforcing the CSRF protection. We are really just ensuring that the request is coming from a client that has the ability to set a header. We just have to know what name to look for.
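
          In configuration terms, the agreement Larry describes is just a pair of server-side properties that clients must match. A deployment enabling CSRF prevention for WebHDFS would set something along these lines (a sketch only; the exact keys and defaults should be confirmed against hdfs-default.xml and the updated WebHDFS documentation):

```xml
<!-- Illustrative hdfs-site.xml fragment; confirm exact property names
     and defaults against hdfs-default.xml for this release. -->
<property>
  <name>dfs.webhdfs.rest-csrf.enabled</name>
  <value>true</value>
  <description>Enables CSRF prevention for WebHDFS. Disabled by
  default for backwards compatibility with existing clients.</description>
</property>
<property>
  <name>dfs.webhdfs.rest-csrf.custom-header</name>
  <value>X-XSRF-HEADER</value>
  <description>The header name the filter expects. Any name works as
  long as clients send the same header the filter is configured to
  check.</description>
</property>
```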

          cnauroth Chris Nauroth added a comment -

          Here is patch v003. I updated this in response to HADOOP-12758, where Larry McCay added support to the filter for checking the User-Agent. With consideration of Larry's recent comments here, I have kept the configurability of the header name.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 52s trunk passed
          +1 compile 6m 11s trunk passed with JDK v1.8.0_72
          +1 compile 6m 42s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 3s trunk passed
          +1 mvnsite 2m 28s trunk passed
          +1 mvneclipse 0m 41s trunk passed
          +1 findbugs 5m 2s trunk passed
          +1 javadoc 2m 18s trunk passed with JDK v1.8.0_72
          +1 javadoc 3m 10s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 57s the patch passed
          +1 compile 5m 55s the patch passed with JDK v1.8.0_72
          +1 javac 5m 55s the patch passed
          +1 compile 6m 51s the patch passed with JDK v1.7.0_95
          +1 javac 6m 51s the patch passed
          +1 checkstyle 1m 10s root: patch generated 0 new + 107 unchanged - 5 fixed = 107 total (was 112)
          +1 mvnsite 2m 53s the patch passed
          +1 mvneclipse 0m 47s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 6m 29s the patch passed
          +1 javadoc 2m 49s the patch passed with JDK v1.8.0_72
          +1 javadoc 3m 40s the patch passed with JDK v1.7.0_95
          +1 unit 8m 0s hadoop-common in the patch passed with JDK v1.8.0_72.
          +1 unit 0m 55s hadoop-hdfs-client in the patch passed with JDK v1.8.0_72.
          -1 unit 75m 26s hadoop-hdfs in the patch failed with JDK v1.8.0_72.
          +1 unit 7m 21s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 3s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 0m 27s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 26s Patch does not generate ASF License warnings.
          163m 12s



          Reason Tests
          JDK v1.8.0_72 Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.TestRollingUpgrade



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12787136/HDFS-9711.003.patch
          JIRA Issue HDFS-9711
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux b04c639df0ed 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 60d2011
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14438/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14438/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14438/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14438/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Max memory used 77MB
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14438/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          The last round of JDK 8 test failures are unrelated. I cannot repro.

          lmccay Larry McCay added a comment -

          Hi Chris Nauroth - Looks great!
          The effort to add a filter for WebHDFS is greater than I anticipated.

          a couple quick things:

          • I like the refactoring for an isRequestAllowed method on the filter - I actually meant to go back and do that earlier
          • I notice that you have to return your own error message in the channelRead0 method of RestCsrfPreventionFilterHandler. Perhaps we should provide a constant for that in the filter too. As it stands now, the message you return is slightly different and a bit more ambiguous than what is returned by the filter itself (which is why I changed it).
          • I'd also like to understand why the typical filter processing isn't being used in this code path. Not because I think it should be, but I'd like to understand the use case here.
          cnauroth Chris Nauroth added a comment -

          Larry McCay, thank you for the notes.

          The reason that this code path is different from the typical filter processing is that we've made a decision to implement the DataNode side of WebHDFS in our own Netty-based HTTP server instead of a servlet container. Compared to the Jetty-based HttpServer2 class, the Netty-based implementation improves latency and stability for large block data transfer. Initial development of the Netty-based server was tracked in HDFS-7279 if you'd like to see more background.

          Because the DataNode side of WebHDFS does not use a servlet container, it takes some extra effort to integrate with the servlet filter. I explored a few options and eventually landed on refactoring logic into the public isRequestAllowed method as the simplest choice.

          Another alternative I explored was providing a minimal implementation of the servlet API classes that the DataNode could pass into the doFilter method. However, this quickly degenerated into writing a ton of code that we really don't need. Things like HttpServletRequest and HttpServletResponse are wide interfaces with a lot of methods. I would have needed to implement all methods (a lot of extra work) or stub them to throw exceptions (error-prone for future maintenance) even though the filter really only needs a small subset of those methods. This amount of effort isn't worth it just to integrate with one servlet filter. If we have future requirements for more servlet API integrations in the DataNode (i.e. requirements for plugging in a lot of filters), then I might resurrect this idea, but doing it now would be premature.

          I expect this situation is unique to WebHDFS. Other components in the ecosystem should have an easier time integrating, because they'll be using HttpServer2 or some other servlet container.

          I'll look into making some changes to the patch to address your concerns about the error message.

          cnauroth Chris Nauroth added a comment -

          I'm attaching patch v004. The only changes since v003 are in RestCsrfPreventionFilter and RestCsrfPreventionFilterHandler.

          This is a different approach. I introduced an HttpInteraction interface that defines the bare minimum API required by the filter to execute the CSRF prevention logic. doFilter implements this interface as a passthrough to the servlet API. RestCsrfPreventionFilterHandler implements it in terms of the Netty API. Since HttpInteraction is a minimal interface, it avoids the problem of needing to implement all those servlet API methods that I talked about in my last comment.

          The benefit of this approach is that it improves encapsulation. The DataNode code no longer needs to know what headers the filter wants to read or what kind of error message it wants to send back. I expect it's less likely that changes in the filter logic would trigger corresponding changes in the DataNode code. The possible drawback is that the extra level of indirection perhaps makes the code a little less readable. Larry McCay, what is your opinion?
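
          The shape of the approach can be sketched as follows. This is an illustrative mock-up, not the actual Hadoop source: the CSRF logic is written once against a minimal interface, and each HTTP stack (servlet container on the NameNode, Netty on the DataNode) supplies a thin adapter; a toy in-memory adapter stands in for both here.

```java
import java.util.HashMap;
import java.util.Map;

public class HttpInteractionSketch {

  /** Bare minimum the CSRF logic needs from any HTTP stack. */
  public interface HttpInteraction {
    String getMethod();
    String getHeader(String name);
    void proceed();                          // continue normal processing
    void sendError(int code, String message); // reject the request
  }

  /** Filter logic written once against the minimal interface.
      (Simplified: treats only GET as a safe method.) */
  public static void handle(HttpInteraction http, String headerName) {
    if ("GET".equals(http.getMethod()) || http.getHeader(headerName) != null) {
      http.proceed();
    } else {
      http.sendError(400,
          "Missing Required Header for CSRF Vulnerability Protection");
    }
  }

  /** Toy adapter standing in for the servlet- or Netty-based ones. */
  public static class FakeInteraction implements HttpInteraction {
    public final String method;
    public final Map<String, String> headers = new HashMap<>();
    public String outcome = "pending";

    public FakeInteraction(String method) { this.method = method; }
    public String getMethod() { return method; }
    public String getHeader(String name) { return headers.get(name); }
    public void proceed() { outcome = "proceeded"; }
    public void sendError(int code, String message) { outcome = "error " + code; }
  }

  public static void main(String[] args) {
    // POST without the header is rejected.
    FakeInteraction post = new FakeInteraction("POST");
    handle(post, "X-XSRF-HEADER");
    System.out.println(post.outcome);           // error 400

    // POST carrying the header proceeds.
    FakeInteraction ok = new FakeInteraction("POST");
    ok.headers.put("X-XSRF-HEADER", "");
    handle(ok, "X-XSRF-HEADER");
    System.out.println(ok.outcome);             // proceeded
  }
}
```

          The encapsulation benefit is visible in the sketch: handle() owns the header name semantics and the error message, so an adapter never needs to know either.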

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 28s Maven dependency ordering for branch
          +1 mvninstall 6m 26s trunk passed
          +1 compile 5m 39s trunk passed with JDK v1.8.0_72
          +1 compile 6m 38s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 3s trunk passed
          +1 mvnsite 2m 27s trunk passed
          +1 mvneclipse 0m 40s trunk passed
          +1 findbugs 5m 3s trunk passed
          +1 javadoc 2m 15s trunk passed with JDK v1.8.0_72
          +1 javadoc 3m 9s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 57s the patch passed
          +1 compile 5m 30s the patch passed with JDK v1.8.0_72
          +1 javac 5m 30s the patch passed
          +1 compile 6m 29s the patch passed with JDK v1.7.0_95
          +1 javac 6m 29s the patch passed
          +1 checkstyle 1m 4s root: patch generated 0 new + 108 unchanged - 5 fixed = 108 total (was 113)
          +1 mvnsite 2m 25s the patch passed
          +1 mvneclipse 0m 41s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 5m 42s the patch passed
          +1 javadoc 2m 15s the patch passed with JDK v1.8.0_72
          +1 javadoc 3m 12s the patch passed with JDK v1.7.0_95
          +1 unit 6m 50s hadoop-common in the patch passed with JDK v1.8.0_72.
          +1 unit 0m 52s hadoop-hdfs-client in the patch passed with JDK v1.8.0_72.
          -1 unit 62m 18s hadoop-hdfs in the patch failed with JDK v1.8.0_72.
          +1 unit 7m 15s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 3s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          +1 unit 53m 43s hadoop-hdfs in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 24s Patch does not generate ASF License warnings.
          197m 29s



          Reason Tests
          JDK v1.8.0_72 Failed junit tests hadoop.hdfs.TestFileAppend
            hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
            hadoop.hdfs.TestCrcCorruption
            hadoop.hdfs.server.datanode.TestBlockScanner



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12787323/HDFS-9711.004.patch
          JIRA Issue HDFS-9711
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 1238ca2a7ea5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / c3641ed
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14443/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14443/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14443/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Max memory used 77MB
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14443/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          The test failures from the v004 pre-commit run are unrelated.

          lmccay Larry McCay added a comment -

          Hey Chris Nauroth - I love the objectives of this and can see its elegance. I do also agree that it makes it hard to read. There is something especially odd about the indirection being built completely into the CSRF filter itself, as well as into the handler of the httpInteraction, that makes you really wonder why it is done this way. I would find myself wanting to simplify it without realizing that some other component was using the handler method with another implementation of httpInteraction.

          You do however document the HttpInteraction pretty well in the same file. So, maybe it is fine.

          I am curious about the following line in the javadoc for the interface though:

          + * Typical usage of the filter inside a servlet container will not need to use
          + * this interface.

          The HttpInteraction is certainly being used from doFilter. Are you saying that there is no code other than the doFilter implementation that will need to use the HttpInteraction instance directly? That seems to make sense.

          Removing the anonymous extension may help make it more readable.

          Instead of:

          handleHttpInteraction(new HttpInteraction() {
            @Override
            public String getHeader(String header) {
              return httpRequest.getHeader(header);
            }

            @Override
            public String getMethod() {
              return httpRequest.getMethod();
            }

            @Override
            public void proceed() throws IOException, ServletException {
              chain.doFilter(httpRequest, httpResponse);
            }

            @Override
            public void sendError(int code, String message) throws IOException {
              httpResponse.sendError(code, message);
            }
          });

          We could create a concrete instance of ServletFilterHttpInteraction instead, like:

          handleHttpInteraction(new ServletFilterHttpInteraction(request, response, chain));

          and:

          handleHttpInteraction(new NettyHttpInteraction(ctx, req));

          Do you think it would help?
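          For illustration, the named-class approach suggested here could be sketched as below. This is only a standalone approximation, not the committed implementation: the HttpInteraction shape is taken from the quoted anonymous class, the map-backed request stand-in replaces HttpServletRequest so the snippet compiles without the servlet API, and the default header name and ignored-methods list are assumptions.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Interface shape copied from the anonymous class quoted above.
interface HttpInteraction {
    String getHeader(String header);
    String getMethod();
    void proceed();
    void sendError(int code, String message);
}

// A named, self-documenting implementation in place of the anonymous class.
// Backed by a plain Map here (instead of HttpServletRequest) purely so the
// sketch is self-contained; a real ServletFilterHttpInteraction would wrap
// the request, response, and filter chain.
class MapBackedHttpInteraction implements HttpInteraction {
    private final String method;
    private final Map<String, String> headers;
    boolean proceeded;  // records whether the chain was continued
    int errorCode;      // records any error status sent

    MapBackedHttpInteraction(String method, Map<String, String> headers) {
        this.method = method;
        this.headers = headers;
    }

    @Override public String getHeader(String header) { return headers.get(header); }
    @Override public String getMethod() { return method; }
    @Override public void proceed() { proceeded = true; }
    @Override public void sendError(int code, String message) { errorCode = code; }
}

public class RestCsrfCheckSketch {
    // Assumed defaults for illustration only.
    static final String HEADER = "X-XSRF-HEADER";
    static final Set<String> IGNORED_METHODS =
        new HashSet<>(Arrays.asList("GET", "OPTIONS", "HEAD", "TRACE"));

    // Core decision logic shared by the servlet and Netty entry points:
    // any non-ignored method must carry the custom header to proceed.
    static void handleHttpInteraction(HttpInteraction interaction) {
        if (IGNORED_METHODS.contains(interaction.getMethod())
            || interaction.getHeader(HEADER) != null) {
            interaction.proceed();
        } else {
            interaction.sendError(400, "Missing required header: " + HEADER);
        }
    }
}
```

The point of the named class is exactly what is argued above: a reader of handleHttpInteraction can see at the call site which transport is in play, rather than scanning an inline anonymous body.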

          cnauroth Chris Nauroth added a comment -

          Hi Larry McCay. I hear you! I was unsure about v004 myself, so seeing these comments is helpful.

          I am curious about the following line in the javadoc for the interface though:

          I was trying to communicate that typical users of the filter (meaning people who run a servlet container) won't need to get involved with the HttpInteraction interface at all. It's true that the doFilter method uses it, but that's an internal implementation detail hidden from anyone who just configures the filter inside a servlet container.

          Removing the anonymous extension may help make it more readable.

          Yes, I see your point. At this point, I think we have 2 options:

          1. Abandon patch v004 as a failed experiment. Go back to v003, but make the response message constant accessible for reuse.
          2. Promote the anonymous inner classes to more clearly documented named classes.

          Let me know your thoughts, and then I'll post another revision. I have a slight preference for keeping the HttpInteraction, but I could be swayed.

          lmccay Larry McCay added a comment -

          I am much more inclined to try and make v004 work than go back to v003.

          What do you think about going with option #2 and also pulling handleHttpInteraction out into a CsrfUtils class?
          This makes it less odd that it is all encapsulated in the same impl and a little more clear that the handler is used by multiple classes.

          Perhaps CsrfUtils.handleRestHttpInteraction(HttpInteraction interaction), with the anticipation of a later CsrfUtils.handleWebAppHttpInteraction(HttpInteraction interaction)?

          The webapp one would have to be able to compare a session value of the header to the actual value sent by the client - which would be a new constructor argument on ServletFilterHttpInteraction/NettyHttpInteraction.

          We could also just overload the method with an additional parameter for the value to check against and leave it as handleHttpInteraction(HttpInteraction interaction, String nonce)

          Anyway, I think that some simple separation with a Utils class would help make it more readable as well.
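          The two flavors being floated here could be sketched roughly as follows. Everything in this snippet is hypothetical and drawn only from the discussion: the method names mirror the proposal above, the interaction is reduced to plain strings for brevity, and "GET" stands in for the full ignored-methods list.

```java
public class CsrfNonceSketch {
    static final String HEADER = "X-XSRF-HEADER"; // assumed default header name

    // REST flavor: the custom header merely has to be present on a
    // state-changing request; its value does not matter.
    static boolean handleRestHttpInteraction(String method, String headerValue) {
        return "GET".equals(method) || headerValue != null;
    }

    // Hypothetical web-app flavor: the header value must also match a
    // per-session nonce supplied by the caller.
    static boolean handleHttpInteraction(String method, String headerValue,
                                         String nonce) {
        return "GET".equals(method)
            || (headerValue != null && headerValue.equals(nonce));
    }
}
```

The overload keeps the REST path unchanged while letting a web-app caller pass the session value to compare against, which is the extra constructor argument mentioned above.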

          cnauroth Chris Nauroth added a comment -

          Larry McCay, thanks again. I think we're basically on the same page.

          What do you think about going with option #2 and also pulling the handleHttpInteraction out into a CsrfUtils class.

          The implementation of handleHttpInteraction needs access to the filter initialization parameters like the header and the methods to ignore. I think we'd have to refactor all of the data members of the filter into the proposed CsrfUtils class. If the class is actually holding state like that, then something like CsrfContext would likely be a better name. RestCsrfPreventionFilter would then be a very thin shim calling into CsrfContext. The DataNode wouldn't use the filter class at all. Instead, it would work with CsrfContext. As a side benefit, the DataNode would no longer need to stub an implementation of FilterConfig.

          Do you think it makes sense to go this far with the refactoring? If so, I can put together a new patch revision.

          lmccay Larry McCay added a comment -

          No, I don't think that is necessary to go that far.

          +1 for removing the anonymous inner classes.


          cnauroth Chris Nauroth added a comment -

          Here is patch v005. The only changes since v004 are in RestCsrfPreventionFilter and RestCsrfPreventionFilterHandler. I refactored into named classes, and I also tried to clarify the phrasing of the comment on the HttpInteraction interface.

          lmccay Larry McCay added a comment -

          I don't know why, but I think it is much better now.
          It may just be that I am no longer drawn into trying to understand what is in the anonymous class.

          +1

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 13m 41s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 40s Maven dependency ordering for branch
          +1 mvninstall 9m 40s trunk passed
          +1 compile 11m 13s trunk passed with JDK v1.8.0_72
          +1 compile 9m 50s trunk passed with JDK v1.7.0_95
          +1 checkstyle 1m 30s trunk passed
          +1 mvnsite 3m 26s trunk passed
          +1 mvneclipse 0m 55s trunk passed
          +1 findbugs 6m 53s trunk passed
          +1 javadoc 3m 28s trunk passed with JDK v1.8.0_72
          +1 javadoc 4m 36s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 19s Maven dependency ordering for patch
          +1 mvninstall 2m 44s the patch passed
          +1 compile 11m 6s the patch passed with JDK v1.8.0_72
          +1 javac 11m 6s the patch passed
          +1 compile 9m 54s the patch passed with JDK v1.7.0_95
          +1 javac 9m 53s the patch passed
          -1 checkstyle 1m 27s root: patch generated 1 new + 107 unchanged - 5 fixed = 108 total (was 112)
          +1 mvnsite 3m 25s the patch passed
          +1 mvneclipse 0m 56s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 7m 46s the patch passed
          +1 javadoc 3m 29s the patch passed with JDK v1.8.0_72
          +1 javadoc 4m 40s the patch passed with JDK v1.7.0_95
          +1 unit 11m 2s hadoop-common in the patch passed with JDK v1.8.0_72.
          +1 unit 1m 23s hadoop-hdfs-client in the patch passed with JDK v1.8.0_72.
          -1 unit 86m 22s hadoop-hdfs in the patch failed with JDK v1.8.0_72.
          +1 unit 10m 18s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 22s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 78m 15s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 34s Patch does not generate ASF License warnings.
          303m 7s



          Reason Tests
          JDK v1.8.0_72 Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode
            hadoop.hdfs.TestFileAppend
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.server.datanode.TestDirectoryScanner
            hadoop.hdfs.security.TestDelegationTokenForProxyUser
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestEncryptionZonesWithKMS
            hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
            hadoop.hdfs.TestDistributedFileSystem
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.server.datanode.TestDirectoryScanner
            hadoop.hdfs.TestEncryptionZones



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12787707/HDFS-9711.005.patch
          JIRA Issue HDFS-9711
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 825da9486778 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9fdfb54
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14482/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14482/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14482/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14482/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14482/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14482/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14482/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          I can see that TestRollingFileSystemSinkWithSecureHdfs fails when run on JDK 7, but not when run on JDK 8. This is unrelated to the patch. This test was introduced in HDFS-9780, so I've asked the contributors on that one to take a look.

          Aside from that, I cannot reproduce the other test failures.

          The Checkstyle warning is about a missing package-info.java file for the org.apache.hadoop.hdfs.server.datanode.web package. It's not worthwhile adding this for a package that isn't part of the public API.

          jingzhao Jing Zhao added a comment -

          Thanks for working on this, Chris Nauroth! The patch looks good to me. +1

          xyao Xiaoyu Yao added a comment -

          Thanks Chris Nauroth for working on this. The patch looks good to me. +1.
          One NIT: Can we move WebHdfsFileSystem#getTrimmedStringList() with default string support to StringUtils so that it can be used by configuration keys similar to this?

          cnauroth Chris Nauroth added a comment -

          Can we move WebHdfsFileSystem#getTrimmedStringList() with default string support to StringUtils so that it can be used by configuration keys similar to this?

          I considered this a while ago, but then I decided against it. It would be cumbersome to put this method into StringUtils, because it would create a circular dependency between Configuration and StringUtils.

          We could potentially make it a member of Configuration, but then the problem is that there are similar pre-existing methods, but with slightly different semantics, accepting a default list as varargs instead of a default string configuration value:

            public String[] getStrings(String name, String... defaultValue) {
          
            public String[] getTrimmedStrings(String name, String... defaultValue) {
          

          Adding Configuration#getTrimmedStringList(String, String) would make it part of the public API of Configuration, and the lack of symmetry with the existing methods could be a source of confusion. We could revisit this later though if it becomes a common pattern.
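          For reference, the semantics under discussion — a default expressed as a single comma-separated string (as stored in hdfs-default.xml) rather than as varargs — can be sketched as a standalone approximation. This is illustrative only; the real helper is WebHdfsFileSystem#getTrimmedStringList and reads from Configuration rather than taking the raw value directly.

```java
import java.util.ArrayList;
import java.util.List;

public class TrimmedStringListSketch {
    // Splits a comma-separated value into trimmed, non-empty entries,
    // falling back to defaultValue (itself a comma-separated string)
    // when the configured value is absent.
    static List<String> getTrimmedStringList(String value, String defaultValue) {
        String raw = (value != null) ? value : defaultValue;
        List<String> result = new ArrayList<>();
        for (String s : raw.split(",")) {
            String trimmed = s.trim();
            if (!trimmed.isEmpty()) {
                result.add(trimmed);
            }
        }
        return result;
    }
}
```

Contrast this with getStrings(String, String...), where the default is already a parsed list: the single-string form keeps the default in the same shape as the XML property value, which is why the two sets of semantics do not line up cleanly.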

          I'd like to commit this later today based on the multiple +1s.

          xyao Xiaoyu Yao added a comment -

          LGTM, Thanks Chris Nauroth for the clarification!

          cnauroth Chris Nauroth added a comment -

          There is a small merge conflict for branch-2, so I'm attaching another patch file to get a pre-commit run specifically for branch-2.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 2m 19s Maven dependency ordering for branch
          +1 mvninstall 7m 23s branch-2 passed
          +1 compile 6m 14s branch-2 passed with JDK v1.8.0_72
          +1 compile 6m 55s branch-2 passed with JDK v1.7.0_95
          +1 checkstyle 1m 12s branch-2 passed
          +1 mvnsite 2m 25s branch-2 passed
          +1 mvneclipse 0m 46s branch-2 passed
          -1 findbugs 2m 3s hadoop-common-project/hadoop-common in branch-2 has 5 extant Findbugs warnings.
          -1 findbugs 1m 50s hadoop-hdfs-project/hadoop-hdfs-client in branch-2 has 5 extant Findbugs warnings.
          +1 javadoc 2m 37s branch-2 passed with JDK v1.8.0_72
          +1 javadoc 3m 31s branch-2 passed with JDK v1.7.0_95
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 10s the patch passed
          +1 compile 6m 23s the patch passed with JDK v1.8.0_72
          +1 javac 6m 23s the patch passed
          +1 compile 7m 0s the patch passed with JDK v1.7.0_95
          +1 javac 7m 0s the patch passed
          -1 checkstyle 1m 10s root: patch generated 1 new + 103 unchanged - 5 fixed = 104 total (was 108)
          +1 mvnsite 2m 25s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          -1 whitespace 0m 0s The patch has 61 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 xml 0m 0s The patch has no ill-formed XML file.
          +1 findbugs 6m 39s the patch passed
          +1 javadoc 2m 36s the patch passed with JDK v1.8.0_72
          +1 javadoc 3m 36s the patch passed with JDK v1.7.0_95
          -1 unit 8m 31s hadoop-common in the patch failed with JDK v1.8.0_72.
          +1 unit 1m 0s hadoop-hdfs-client in the patch passed with JDK v1.8.0_72.
          -1 unit 53m 48s hadoop-hdfs in the patch failed with JDK v1.8.0_72.
          +1 unit 8m 19s hadoop-common in the patch passed with JDK v1.7.0_95.
          +1 unit 1m 3s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 49m 0s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 27s Patch does not generate ASF License warnings.
          196m 24s



          Reason Tests
          JDK v1.8.0_72 Failed junit tests hadoop.security.ssl.TestReloadingX509TrustManager
            hadoop.hdfs.server.blockmanagement.TestBlockManager
          JDK v1.7.0_95 Failed junit tests hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
            hadoop.hdfs.server.namenode.TestCacheDirectives



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:babe025
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12788339/HDFS-9711-branch-2.005.patch
          JIRA Issue HDFS-9711
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux d98bc50c4e94 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision branch-2 / ad4fcd1
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_72 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_72.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_72.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14519/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14519/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14519/console
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment - -1 overall
          cnauroth Chris Nauroth added a comment -

          Checkstyle flagged a line that was too long for the branch-2 patch. I'll fix that on commit. All of the other warnings were noise.

          cnauroth Chris Nauroth added a comment -

          I have committed this to trunk, branch-2 and branch-2.8. Larry, thanks very much for your guidance on this. Thanks also to Haohui, Anu, Jing and Xiaoyu for code reviews.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9322 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9322/)
          HDFS-9711. Integrate CSRF prevention filter in WebHDFS. Contributed by (cnauroth: rev 5d1889a66d91608d34ca9411fb6e9161e637e9d3)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/rest-csrf.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/RestCsrfPreventionFilterHandler.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/RestCsrfPreventionFilter.java
          • hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithRestCsrfPreventionFilter.java
          • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
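
          The hdfs-default.xml changes in the commit above introduce the configuration surface for this feature. As a rough sketch (property names follow the WebHDFS CSRF documentation added by this patch; the values shown are illustrative, not the shipped defaults), enabling the filter in hdfs-site.xml looks like:

          ```xml
          <!-- Illustrative sketch: enable CSRF prevention for WebHDFS requests.
               Property names come from the hdfs-default.xml entries added by
               this patch; values here are examples, not authoritative defaults. -->
          <property>
            <name>dfs.webhdfs.rest-csrf.enabled</name>
            <value>true</value>
          </property>
          <property>
            <name>dfs.webhdfs.rest-csrf.custom-header</name>
            <value>X-XSRF-HEADER</value>
          </property>
          <property>
            <name>dfs.webhdfs.rest-csrf.methods-to-ignore</name>
            <value>GET,OPTIONS,HEAD,TRACE</value>
          </property>
          ```

          With enforcement on, non-exempt clients must send the configured custom header on mutating requests, or the server rejects them; see the updated WebHDFS.md in this commit for the full description.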

            People

            • Assignee: Chris Nauroth (cnauroth)
            • Reporter: Chris Nauroth (cnauroth)
            • Votes: 0
            • Watchers: 12
