Hadoop HDFS
HDFS-481

Bug Fixes + HdfsProxy to use proxy user to impersonate the real user

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.21.0
    • Fix Version/s: 0.21.0
    • Component/s: contrib/hdfsproxy
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Bugs:

      1. hadoop-version is not recognized if the ant command is run from src/contrib/ or from src/contrib/hdfsproxy.
      If ant is run from $HADOOP_HDFS_HOME, hadoop-version is passed to the contrib builds through subant. But if it is run from src/contrib or src/contrib/hdfsproxy, hadoop-version is not recognized.

      2. LdapIpDirFilter.java is not thread-safe. userName, group, and paths are per-request values and cannot be class members (see the first sketch after this list).

      3. Addressed the following StackOverflowError:
      ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]] Servlet.service() for servlet proxyForward threw exception
      java.lang.StackOverflowError
      at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
      This occurs when the target war (/target.war) does not exist: the forwarding war forwards to its parent context path /, which maps to the forwarding war itself, causing an infinite loop. Added "HDFS Proxy Forward".equals(dstContext.getServletContextName()) to the if condition to break the loop.

      4. Kerberos credentials of the remote user aren't available; HdfsProxy needs to act on behalf of the real user to service the requests (see the second sketch below).
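
      For bug 2, the sketch below illustrates the thread-safety problem and the per-request shape of the fix. It is a minimal sketch, not the actual patch; the helper methods and attribute names are hypothetical.

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        public class PerRequestStateSketch implements Filter {
          // Broken pattern: one filter instance serves all requests, so
          // per-request values stored in fields race between threads:
          //   private String userName;   // shared across threads
          //   private String groupName;  // shared across threads

          @Override public void init(FilterConfig conf) {}
          @Override public void destroy() {}

          @Override
          public void doFilter(ServletRequest req, ServletResponse resp,
                               FilterChain chain) throws IOException, ServletException {
            // Fixed pattern: anything derived from the request lives in
            // locals or request attributes, never in filter fields.
            String userName = resolveUser(req);    // hypothetical LDAP lookup
            String groupName = resolveGroup(req);  // hypothetical LDAP lookup
            req.setAttribute("authorized.user", userName);
            req.setAttribute("authorized.group", groupName);
            chain.doFilter(req, resp);
          }

          private String resolveUser(ServletRequest req) { return "demo-user"; }   // stub
          private String resolveGroup(ServletRequest req) { return "demo-group"; } // stub
        }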
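
      For bug 4, the thread below settles on UGI.createProxyUser: the proxy authenticates with its own trusted credentials and acts on behalf of the requesting user. A minimal sketch of that pattern, assuming the createProxyUser/doAs API named in the comments:

        import java.security.PrivilegedExceptionAction;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.security.UserGroupInformation;

        public class ProxyUserSketch {
          // Open a FileSystem as realUser while authenticating as the proxy's
          // own login user (hdfsproxy runs as a trusted super user).
          static FileSystem openAs(String realUser, final Configuration conf)
              throws Exception {
            UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
                realUser, UserGroupInformation.getLoginUser());
            return proxyUgi.doAs(new PrivilegedExceptionAction<FileSystem>() {
              @Override public FileSystem run() throws Exception {
                return FileSystem.get(conf); // the namenode sees the real user
              }
            });
          }
        }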

      1. HDFS-481.patch
        1 kB
        zhiyong zhang
      2. HDFS-481.patch
        2 kB
        zhiyong zhang
      3. HDFS-481.patch
        5 kB
        zhiyong zhang
      4. HDFS-481.patch
        12 kB
        zhiyong zhang
      5. HDFS-481.patch
        49 kB
        Srikanth Sundarrajan
      6. HDFS-481-bp-y20.patch
        114 kB
        Srikanth Sundarrajan
      7. HDFS-481-bp-y20s.patch
        130 kB
        Srikanth Sundarrajan
      8. HDFS-481.out
        1.02 MB
        Srikanth Sundarrajan
      9. HDFS-481.patch
        41 kB
        Srikanth Sundarrajan
      10. HDFS-481.patch
        37 kB
        Srikanth Sundarrajan
      11. HDFS-481.patch
        36 kB
        Srikanth Sundarrajan
      12. HDFS-481-bp-y20.patch
        98 kB
        Srikanth Sundarrajan
      13. HDFS-481-bp-y20s.patch
        113 kB
        Srikanth Sundarrajan
      14. HDFS-481-NEW.patch
        110 kB
        Srikanth Sundarrajan
      15. HDFS-481-bp-y20s.patch
        4 kB
        Srikanth Sundarrajan


          Activity

          zhiyong zhang added a comment -

          1) Included hadoop-version in the src/contrib/hdfsproxy/build.xml file.
          2) Changed conf to sslConf in HsftpFileSystem.java.

          Chris Douglas added a comment -

          Won't this override the hadoop-version property defined by the main build script? Compilation will break when the core jar is updated. This is a problem for all the contrib modules and I'd rather avoid ad-hoc fixes...

          zhiyong zhang added a comment -

          Yes, I agree. How about <import file="${hadoop.root}/build.xml"/> in src/contrib/hdfsproxy/build.xml? It seems pretty clumsy though. I think it would be better to define hadoop-version in a build.properties file instead of in build.xml in the hdfs trunk. That way every contrib can import this property file without importing too much. I don't understand why there is a <property file="${basedir}/build.properties" /> line in the $HADOOP_HDFS_HOME/build.xml file when there is no such file in the directory.
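
          A sketch of the arrangement being proposed here, relying on Ant's first-definition-wins property semantics. The hadoop.root default and the version value are illustrative assumptions, not the actual build files.

            <!-- src/contrib/hdfsproxy/build.xml (sketch) -->
            <project name="hdfsproxy">
              <!-- The first definition of a property wins in Ant, so a
                   hadoop-version passed down by subant from the top-level
                   build takes precedence over everything below. -->
              <property name="hadoop.root" location="${basedir}/../../.."/>
              <!-- Shared properties (e.g. hadoop-version) kept in one place. -->
              <property file="${hadoop.root}/build.properties"/>
              <!-- Last-resort default for a standalone build. -->
              <property name="hadoop-version" value="0.21.0-dev"/>
            </project>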

          Chris Douglas added a comment -

          The only reason the version is required is for including hadoop jars, right? If the include spec takes wildcards, that might be an acceptable workaround until we get a better packaging story for intra-project dependencies

          zhiyong zhang added a comment -

          That's what I did in the beginning. Someone changed it during the project split to make it look prettier, I guess.
          I will change it back right now.

          zhiyong zhang added a comment -

          done.

          zhiyong zhang added a comment -

          Used a synchronized block to address some race conditions in LdapIpDirFilter.java.

          zhiyong zhang added a comment -

          Fixed the bugs described above.

          Chris Douglas added a comment -

          Thanks for fixing the lib packaging. There was just one pair of changes that I wanted to ask after:

          -    <display-name>HDFS Proxy</display-name>
          +    <display-name>HDFS Proxy Forward</display-name>
          
          -    if (dstContext == null) {
          -      LOG.info("Context non-exist or restricted from access: " + version);
          +    // avoid infinite forwarding.
          +    if (dstContext == null
          +        || "HDFS Proxy Forward".equals(dstContext.getServletContextName())) {
          +      LOG.error("Context (" + version
          +          + ".war) non-exist or restricted from access");
          

          This is to prevent the forwarding servlet from passing requests to itself? When does this occur? Is there another way to detect/prevent it, other than (what looks like) taking a configurable string and hard-coding a check for it?

          The rest of the changes look reasonable.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12414531/HDFS-481.patch
          against trunk revision 806746.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/84/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/84/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/84/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/84/console

          This message is automatically generated.

          zhiyong zhang added a comment -

          Hi Chris, thanks for the comment.
          Right, that piece of code is to prevent the forwarding servlet from passing requests to itself. That happens when the forwarding servlet cannot find a matching servlet to forward the request to.

          For instance, if the user request path is /hadoop20 and hadoop20.war does not exist under the same webapps/ folder as ROOT.war, the forwarding servlet (through ServletContext.getContext()) matches the longest path. Since it cannot find /hadoop20, it matches against the parent path /, which maps to ROOT.war, the forwarding servlet itself. That causes an infinite loop and finally a java.lang.StackOverflowError.

          I thought of using curContext.getServletContextName().equals(dstContext.getServletContextName()) to tell them apart, but it would break the unit test, since the Cactus unit test framework cannot do cross-context forwarding at this stage yet. All forwarding occurs in the same context, so ServletContext.getContext() would always return the same context and the unit test would get stuck there.

          I couldn't think of any other way to work around this at this stage. Do you have any better ideas?

          Thanks.
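
          A sketch of the name-based guard being discussed. The "HDFS Proxy Forward" display-name and the shape of the check come from the patch; the surrounding servlet code is illustrative.

            import java.io.IOException;
            import javax.servlet.ServletContext;
            import javax.servlet.ServletException;
            import javax.servlet.http.HttpServlet;
            import javax.servlet.http.HttpServletRequest;
            import javax.servlet.http.HttpServletResponse;

            public class ForwardGuardSketch extends HttpServlet {
              @Override
              public void doGet(HttpServletRequest request, HttpServletResponse response)
                  throws ServletException, IOException {
                String version = request.getPathInfo(); // e.g. "/hadoop20"
                ServletContext dstContext = getServletContext().getContext(version);
                // When /<version>.war does not exist, getContext() matches the
                // longest existing prefix, i.e. "/", which is this forwarding
                // webapp itself; without the guard it forwards to itself forever.
                if (dstContext == null
                    || "HDFS Proxy Forward".equals(dstContext.getServletContextName())) {
                  response.sendError(HttpServletResponse.SC_FORBIDDEN,
                      "Context (" + version + ".war) non-exist or restricted from access");
                  return;
                }
                dstContext.getRequestDispatcher(request.getServletPath())
                    .forward(request, response);
              }
            }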

          Chris Douglas added a comment -

          I couldn't think of any other way to work around this at this stage. Do you have any better ideas?

          I'm not fluent in the servlet/Tomcat APIs, but hard-coding the (configurable) name of a component to avoid unbounded recursion seems ad hoc. Is it difficult to restrict the set of valid targets to exclude the current servlet? If nothing else, looking up (rather than hard-coding) the servlet name strikes me as a minimum requirement. This can't possibly be an issue unique to hdfsproxy; is there a canonical approach that doesn't work with its requirements?

          Srikanth Sundarrajan added a comment -

          The HDFS-481 patch includes the following fixes:

          • LdapIpDirFilter broken into LdapIpDirFilter for authentication and AuthorizationFilter for authorization.
          • KerberosAuthorizationFilter extends AuthorizationFilter, to be used against the Kerberos-based Hadoop secure version.
          • Infinite redirection addressed by checking the servlet context object instead of the names (see the sketch below).
          • Acting on behalf of the requesting user using UGI.createProxyUser (hdfsproxy runs as a trusted super user).
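
          A sketch of the context-object check mentioned in the third bullet above; the surrounding servlet code is illustrative, not the patch itself.

            import java.io.IOException;
            import javax.servlet.ServletContext;
            import javax.servlet.ServletException;
            import javax.servlet.http.HttpServlet;
            import javax.servlet.http.HttpServletRequest;
            import javax.servlet.http.HttpServletResponse;

            public class ContextIdentityGuardSketch extends HttpServlet {
              @Override
              public void doGet(HttpServletRequest request, HttpServletResponse response)
                  throws ServletException, IOException {
                String version = request.getPathInfo();
                ServletContext dstContext = getServletContext().getContext(version);
                // Comparing context objects avoids hard-coding a display-name
                // and still breaks the self-forwarding cycle.
                if (dstContext == null || dstContext == getServletContext()) {
                  response.sendError(HttpServletResponse.SC_FORBIDDEN);
                  return;
                }
                dstContext.getRequestDispatcher(request.getServletPath())
                    .forward(request, response);
              }
            }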
          Srikanth Sundarrajan added a comment -

          HDFS-481-bp-y20.patch and HDFS-481-bp-y20s.patch are backport patches. Not for commit.

          Tsz Wo Nicholas Sze added a comment -

          @Srikanth

          • Is your patch still fixing the bugs stated in the description?
          • Could you revert the whitespace changes like the following? Otherwise, it is hard to review your patch.
            -<property>
            -    <name>fs.default.name</name>
            -    <!-- cluster variant -->
            -    <value>hdfs://localhost:54321</value>
            -    <description>The name of the default file system.  Either the
            -  literal string "local" or a host:port for NDFS.</description>
            -    <final>true</final>
            -  </property>
            +    <property>
            +        <name>fs.default.name</name>
            +        <!-- cluster variant -->
            +        <value>hdfs://localhost:54321</value>
            +        <description>The name of the default file system.  Either the
            +            literal string "local" or a host:port for NDFS.</description>
            +        <final>true</final>
            +    </property>
            
          Tsz Wo Nicholas Sze added a comment -

          BTW, it seems that hdfsproxy cannot be built. I tried to run TestLdapIpDirFilter but it failed with:

          /home/tsz/hadoop/hdfs/h1/src/contrib/hdfsproxy/build.xml:292:
           org.codehaus.cargo.container.ContainerException: Failed to download
           [http://apache.osuosl.org/tomcat/tomcat-6/v6.0.18/bin/apache-tomcat-6.0.18.zip]
          
          Jakob Homan added a comment -

          BTW, it seems that hdfsproxy cannot be built. I tried to run TestLdapIpDirFilter but it failed with:

          This is a known issue: HDFS-1046

          Srikanth Sundarrajan added a comment -

          Patch already includes changes to build.xml for pulling a newer Tomcat version (to run the LdapIpDirFilter tests):
          
          @@ -299,7 +301,7 @@
                <containerset>
                  <cargo containerId="${tomcat.container.id}" timeout="30000" output="${logs.dir}/output.log" log="${logs.dir}/cargo.log">
                   <zipUrlInstaller
          -            installUrl="http://apache.osuosl.org/tomcat/tomcat-6/v6.0.18/bin/apache-tomcat-6.0.18.zip"
          +            installUrl="http://apache.osuosl.org/tomcat/tomcat-6/v6.0.24/bin/apache-tomcat-6.0.24.zip"
                       installDir="${target.dir}/${tomcat.container.id}"/>
                    <configuration type="existing" home="${tomcatconfig.dir}">
                      <property name="cargo.servlet.port" value="${cargo.servlet.http.port}"/>
          
          

          All the contrib tests (including LdapIpDirFilter and AuthorizationFilter) seem to run successfully with the revised patch (HDFS-481.patch). Attached logs from test-patch and test-contrib runs.

          Nicholas, I will exclude the whitespace changes from the patch and re-attach for review.

          Srikanth Sundarrajan added a comment -

          Updated patch after removing all whitespace/indentation changes. The patch is otherwise identical to the earlier one.

          Srikanth Sundarrajan added a comment -

          @Srikanth

          • Is your patch still fixing the bugs stated in the description?

          The patch includes fixes for all the bugs reported in this JIRA except:

          ssl.client.do.not.authenticate.server setting can only be set by hdfs's configuration files; need to move this setting to ssl-client.xml.

          The patch for this has been uploaded to HDFS-482 and marked as patch-available.

          Tsz Wo Nicholas Sze added a comment -

          > Patch already includes changes to build.xml for pulling newer tomcat version ...

          We should fix the build problem in HDFS-1046 first since it also affects other contributors.

          Tsz Wo Nicholas Sze added a comment -

          > Patch includes fix for all the bugs reported in this JIRA except for ...

          Thanks, Srikanth. Could you update the description and the summary of this issue to reflect all the changes? (If it is not too hard, it would be great if you could divide the patch across some other issues like HDFS-1009. In general, a JIRA should only fix one issue.)

          Srikanth Sundarrajan added a comment -

          Summary of Changes:

          1. ProxyFileDataServlet, ProxyListPathsServlet, ProxyFileForward - Use createProxyUser instead of createRemoteUser to obtain UGI for the requesting user, name.conf - context attribute is set by LdapIpDirFilter

          2. LdapIpDirFilter - Removed class members userId, groupName and Paths and these are now set for each request through LdapEntry (a private inner class)

          3. KerberosAuthorizationFilter - Accessing the proxy user's keytab file for credentials and initializing UGI (see the sketch after this list)

          4. LdapIpDirFilter + AuthorizationFilter - Separated IP-based authentication and path authorization into two independent filters. IP-based authentication is done by LdapIpDirFilter and path authorization is implemented through AuthorizationFilter.

          5. TestLdapIpDirFilter + TestAuthorizationFilter - IP based test cases retained in TestLdapIpDirFilter and path test cases are moved to TestAuthorizationFilter

          6. ProxyUtil - Added methods to create a proxy user and to get the namenode URL from the Hadoop configuration

          7. hdfsproxy-default.xml - Including new security related attributes

          8. tomcat-web.xml - Adding an additional filter for authorization. Allowing LdapIpDirFilter & KerberosAuthorizationFilter to be processed for forward and request methods

          9. build.xml - Including TestAuthorizationFilter for Cactus-based unit tests; also increasing verbosity level for logs during build

          10. ProxyForwardServlet - Fix for infinite looping by verifying whether the destination context is the same as the current one and aborting

          11. TestProxyUtil & TestHdfsProxy - Fixes to get the tests to run
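
          A rough sketch of the keytab login behind item 3. It is a sketch only: the configuration key names here are assumptions for illustration; the real ones live in hdfsproxy's configuration files.

            import java.io.IOException;
            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.security.UserGroupInformation;

            public class KeytabLoginSketch {
              // Log the proxy in from its own keytab so it can later impersonate
              // requesting users via UserGroupInformation.createProxyUser.
              static void loginProxyUser(Configuration conf) throws IOException {
                UserGroupInformation.setConfiguration(conf);
                UserGroupInformation.loginUserFromKeytab(
                    conf.get("hdfsproxy.kerberos.principal"),    // assumed key name
                    conf.get("hdfsproxy.kerberos.keytab.file")); // assumed key name
              }
            }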

          Srikanth Sundarrajan added a comment -

          If it is not too hard, it would be great if you could divide the patch across some other issues like HDFS-1009. In general, a JIRA should only fix one issue

          The proxy would not be fully functional and compatible with HDFS (Kerberos-based setup) otherwise; hence the fixes for HDFS-1009 were moved into this JIRA. The related changes are:

          1. Inclusion of KerberosAuthorizationFilter, which extends AuthorizationFilter
          2. web.xml to include KerberosAuthorizationFilter instead of the default AuthorizationFilter (a sketch of the wiring follows below)

          I have listed the changed files in the patch along with a brief summary of what each change is meant for.
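
          A sketch of the kind of web.xml wiring described in item 2 above. The filter class name follows the comment; the filter-name, url-pattern, and dispatcher settings are illustrative assumptions, not the patch itself.

            <filter>
              <filter-name>authorizationFilter</filter-name>
              <filter-class>org.apache.hadoop.hdfsproxy.KerberosAuthorizationFilter</filter-class>
            </filter>
            <filter-mapping>
              <filter-name>authorizationFilter</filter-name>
              <url-pattern>/*</url-pattern>
              <!-- Process this filter for direct requests and forwards alike. -->
              <dispatcher>REQUEST</dispatcher>
              <dispatcher>FORWARD</dispatcher>
            </filter-mapping>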

          Tsz Wo Nicholas Sze added a comment -

          Srikanth, thank you for the update. I am looking forward to reviewing your new patch.

          Srikanth Sundarrajan added a comment -

          Revised patch for 481 (to exclude changes reported as HDFS-1074)

          Tsz Wo Nicholas Sze added a comment -

          +1 patch looks good.

          Srikanth Sundarrajan added a comment -

          Output from test-patch

          [exec] +1 overall.
          [exec]
          [exec] +1 @author. The patch does not contain any @author tags.
          [exec]
          [exec] +1 tests included. The patch appears to include 15 new or modified tests.
          [exec]
          [exec] +1 javadoc. The javadoc tool did not generate any warning messages.
          [exec]
          [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
          [exec]
          [exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
          [exec]
          [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings.

          test-contrib:

          test:
          BUILD SUCCESSFUL

          Tsz Wo Nicholas Sze added a comment -

          I also have tested it locally. It worked fine.

          I have committed this. Thanks, Srikanth!

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #230 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/230/)
          HDFS-481. hdfsproxy: Bug Fixes + HdfsProxy to use proxy user to impersonate the real user. Contributed by Srikanth

          Srikanth Sundarrajan added a comment -

          Revised backport patch for yhadoop20, in sync with the latest trunk patch.

          Srikanth Sundarrajan added a comment -

          Revised patch for yhadoop20s in sync with trunk patch.

          Srikanth Sundarrajan added a comment -

          Backport patch for y20.1xx

          Srikanth Sundarrajan added a comment -

          Incremental backport to fix broken unit tests in y20.1xx & y20.101. Tests are broken due to:

          • Missing super user setup when the Mini DFS Cluster starts
          • Missing src/test/resources folder
          • UserGroupInformation class depending on krb5.conf on the system (bypassed through a krb5.conf in ${hadoop.core}/src/test - contrib/hdfsproxy/build.xml change)

          This patch needs to be applied incrementally over the HDFS-481-NEW.patch.


            People

            • Assignee:
              Srikanth Sundarrajan
              Reporter:
              zhiyong zhang
            • Votes:
              0
              Watchers:
              5
