Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.21.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      We plan to add Tomcat support to hdfsproxy since Tomcat has good production support at Yahoo.

      1. HADOOP-5023.patch
        125 kB
        zhiyong zhang
      2. HADOOP-5023.patch
        132 kB
        zhiyong zhang
      3. HADOOP-5023.patch
        125 kB
        zhiyong zhang
      4. HADOOP-5023.patch
        130 kB
        zhiyong zhang
      5. HADOOP-5023.patch
        129 kB
        zhiyong zhang
      6. hudson-snip.txt
        3 kB
        Chris Douglas
      7. HADOOP-5023.patch
        129 kB
        zhiyong zhang
      8. HADOOP-5023.patch
        139 kB
        zhiyong zhang
      9. HADOOP-5023-a.patch
        6 kB
        zhiyong zhang

        Activity

        Hudson added a comment -

        Integrated in Hadoop-trunk #778 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/778/ )
        zhiyong zhang added a comment -

        Hi Nigel,

        I uploaded a new patch that dynamically probes free ports for the Cactus unit tests.

        But in order to run the patch, the file src/contrib/hdfsproxy/bin/find_free_port.sh needs to be executable.

        Chris may need to make it executable at commit time, since the patch cannot preserve the executable bit, although the file is executable in my local tree.

        Thanks,
        Zhiyong

        zhiyong zhang added a comment -

        Added functionality to find a free port and dynamically update the server.xml conf file for unit tests.

        Please make src/contrib/hdfsproxy/bin/find_free_port.sh executable when committing.
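        The attached shell script isn't reproduced on this page, but the core of a free-port probe can be sketched in plain Java (a hypothetical illustration, not the script's actual code): binding a ServerSocket to port 0 asks the OS for any unused port, which avoids hard-coding ports like 8087 that may already be occupied on a build machine.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortProbe {
    // Ask the OS for an unused port by binding to port 0, then release it.
    // Caveat: another process could grab the port between close() and reuse,
    // so callers should be prepared to retry.
    static int findFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("free port: " + findFreePort());
    }
}
```

        The probed value would then be substituted into the Connector port in the generated server.xml before the Cactus container starts, which is what makes the tests robust on shared Hudson hosts.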

        zhiyong zhang added a comment -

        I'll try to submit a patch today that probes for free available ports and dynamically rewrites the conf file with them.

        zhiyong zhang added a comment -

        In the above report (Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/14/testReport/ ), there are no hdfsproxy failures.
        I think build #769 is running in a different environment, where some ports (8087 or 8205) are already occupied.

        Should all Hudson tests be run in a clean environment where no other services are running?

        Nigel Daley added a comment -

        Zhiyong, if this can't be fixed on Monday, I'd propose we revert this patch until it behaves on our nightly/patch test machine.

        zhiyong zhang added a comment -

        The failure is most likely due to port 8087 being occupied, which causes the container startup to time out.

        Ramya Sunil added a comment -

        Due to the above patch, build failures are occurring on trunk consistently (http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-trunk/769/console). Below is the output.
        BUILD FAILED
        /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build.xml:773: The following error occurred while executing this line:
        /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
        /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/src/contrib/hdfsproxy/build.xml:150: Failed to start the container after more than [180000] ms. Trying to connect to the http://localhost:8087/test/ServletRedirector?Cactus_Service=RUN_TEST test URL yielded a [-1] error code. Please run in debug mode for more details about the error.

        Can we please reopen the issue?

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12401061/HADOOP-5023.patch
        against trunk revision 748714.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 65 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

        -1 release audit. The applied patch generated 807 release audit warnings (more than the trunk's current 794 warnings).

        -1 core tests. The patch failed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/14/testReport/
        Release audit warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/14/artifact/trunk/current/releaseAuditDiffWarnings.txt
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/14/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/14/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/14/console

        This message is automatically generated.

        Chris Douglas added a comment -

        I committed this. Thanks, Zhiyong

        zhiyong zhang added a comment -

        Added a license file to remove some release audit warnings.

        Test Result:
        Unit Test:
        [cactus] Tomcat 5.x starting...
        Server [Apache-Coyote/1.1] started
        [cactus] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy
        [cactus] Tomcat 5.x started on port [8087]
        [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 15.135 sec
        [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
        [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.543 sec
        [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUgiManager
        [cactus] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.251 sec
        [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
        [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.842 sec
        [cactus] Tomcat 5.x is stopping...
        [cactus] Tomcat 5.x is stopped

        Test-Patch result:

        [exec] -1 overall.
        [exec]
        [exec] +1 @author. The patch does not contain any @author tags.
        [exec]
        [exec] +1 tests included. The patch appears to include 65 new or modified tests.
        [exec]
        [exec] +1 javadoc. The javadoc tool did not generate any warning messages.
        [exec]
        [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
        [exec]
        [exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
        [exec]
        [exec] +1 Eclipse classpath. The patch retains Eclipse classpath integrity.
        [exec]
        [exec] -1 release audit. The applied patch generated 796 release audit warnings (more than the trunk's current 794 warnings).
        [exec]
        [exec]
        [exec]
        [exec]
        [exec] ======================================================================
        [exec] ======================================================================
        [exec] Finished build.

        zhiyong zhang added a comment -

        It seems the automation process didn't retry the patch because it picked up the previously attached hudson-snip.txt file, even after Ant was upgraded.

        I will re-submit the same patch again.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12400919/hudson-snip.txt
        against trunk revision 748090.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 2 new or modified tests.

        -1 patch. The patch command could not apply the patch.

        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta/6/console

        This message is automatically generated.

        zhiyong zhang added a comment -

        I think it is because of the Ant version. Can you check which Ant version Hudson uses?

        I reproduced the same error with ant-1.7.0.

        With ant-1.7.1, there is no error.

        I think Hudson should upgrade to ant-1.7.1, which is the best version available for now.

        Chris Douglas added a comment -

        It looks like this is failing on hudson (starting at 269090 in the console)

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12400886/HADOOP-5023.patch
        against trunk revision 747608.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 65 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

        -1 release audit. The applied patch generated 807 release audit warnings (more than the trunk's current 794 warnings).

        -1 core tests. The patch failed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3915/testReport/
        Release audit warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3915/artifact/trunk/current/releaseAuditDiffWarnings.txt
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3915/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3915/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3915/console

        This message is automatically generated.

        zhiyong zhang added a comment - edited

        For the unitTest parameter problem, I changed the code to:

        if (rqst.getScheme().equalsIgnoreCase("http") && rqst.getParameter("UnitTest") != null) unitTest = true;

        similar to the previous version. In a production deployment, the http port (i.e., the http Connector) is disabled. This should avoid the problem Chris mentioned.

        For user-cert.xml and user-permission.xml, I removed the part that I had added for testing purposes. Now it's all clean.

        The new patch should address all the issues Chris mentioned.
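        In context, the quoted one-liner gates unit-test behavior in the proxy's request handling. A self-contained sketch of just that gating logic (the Request stand-in interface below is an assumption for illustration; the real code operates on javax.servlet.http.HttpServletRequest):

```java
public class UnitTestGate {
    // Stand-in for the two HttpServletRequest methods the check uses.
    interface Request {
        String getScheme();
        String getParameter(String name);
    }

    // Mirrors the gating logic quoted above: unit-test mode is entered only
    // when the request arrived over plain http AND carries a UnitTest query
    // parameter. With the http Connector disabled in production, the first
    // clause can never hold, so production (https) traffic is unaffected.
    static boolean isUnitTest(Request rqst) {
        return "http".equalsIgnoreCase(rqst.getScheme())
            && rqst.getParameter("UnitTest") != null;
    }

    public static void main(String[] args) {
        Request testReq = new Request() {
            public String getScheme() { return "http"; }
            public String getParameter(String name) {
                return "UnitTest".equals(name) ? "true" : null;
            }
        };
        System.out.println("unitTest = " + isUnitTest(testReq));  // prints "unitTest = true"
    }
}
```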

        zhiyong zhang added a comment -

        Hi,

        I just realized I can use $HADOOP_ROOT/build/classes instead of the hadoop jar file to build the war file. This removes the dependency on "ant tar", since $HADOOP_ROOT/build/classes is built when you run the tests.

        The new patch should address all the concerns from Chris.

        You can run ant test without having to run anything first.

        For the user-cert.xml and user-permission.xml, they are needed for the war file and cannot be moved to the test dir.

        Thanks.

        Chris Douglas added a comment -

        Did you run "ant tar" first in the $HADOOP_ROOT dir? hadoop-core-*.jar is needed for the war file. We could have used a static lib dir to hold a pre-compiled hadoop jar, but that would introduce some redundancy. I am not sure which way we should go in this case: either keep a static hadoop jar to make the proxy code more independent but add some redundancy, or depend on the hadoop build but have a cleaner structure. I followed the latter in the patch. I can change it if necessary.

        I didn't, but that's only half of the objection (even after running ant all-jars, or even ant tar, the test still fails). The other half is that, when running with -Dtestcase=FOO, the only tests that run should be those named FOO. The error I'm seeing:

        test:
             [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
           [cactus] -----------------------------------------------------------------
           [cactus] Running tests against Tomcat 5.x @ http://localhost:8087
           [cactus] -----------------------------------------------------------------
           [cactus] Deploying [/snip/hadoop/build/contrib/hdfsproxy/target/test.war] to \
                              [/snip/hadoop/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
           [cactus] Tomcat 5.x starting...
           [cactus] Tomcat 5.x started on port [8087]
        
        BUILD FAILED
        /snip/hadoop/build.xml:772: The following error occurred while executing this line:
        /snip/hadoop/src/contrib/build.xml:48: The following error occurred while executing this line:
        /snip/hadoop/src/contrib/hdfsproxy/build.xml:155: Failed to start the container after more than [180000] ms. \
          Trying to connect to the [http://localhost:8087/test/ServletRedirector?Cactus_Service=RUN_TEST] \
          test URL yielded a [-1] error code. Please run in debug mode for more details about the error.
        

        It is to differentiate normal requests from unit-test requests. It is generally not a good idea to have unit test code mingled with the source code, but in our case, since Cactus in-container unit testing is still in an early stage, it cannot set request attributes in the webRequest. Although it has a servletRequestWrapper, the attributes set through it are not visible to the request seen by the source code. To conduct the unit tests successfully, we have to do some extra work in the source code to cooperate with the unit test code.

        Sorry, my question was not clear. I mean: how does the servlet behave when this parameter is included? The previous version played a similar game, but the unit test functionality was (intended to be) disabled in the "real" version (the http Connector is not started). Since the proxy's purpose is (partially) authentication, I'm curious how this affects its operation in production, as the current trick isn't disabled in that case.

        These two files are also needed for normal functionality. Users need to edit them for their own purposes anyway. I don't think there is much difference between leaving them blank and providing some default testing values.

        While it doesn't affect operation, it's not a readable example. It makes more sense to move this pathological example into the test dir, and put something more comprehensible in the conf dir.

        zhiyong zhang added a comment -
        • On my machine, running ant -Dtestcase=FOO test
          1. Still runs the unit test
          I didn't know tests should be able to be isolated by testcase. I just added that functionality.
          2. Fails (times out)
          Did you run "ant tar" first in the $HADOOP_ROOT dir? hadoop-core-*.jar is needed for the war file. We could have used a static lib dir to hold a pre-compiled hadoop jar, but that would introduce some redundancy. I am not sure which way we should go in this case: either keep a static hadoop jar to make the proxy code more independent but add some redundancy, or depend on the hadoop build but have a cleaner structure. I followed the latter in the patch. I can change it if necessary.
        • If the "UnitTest" parameter is in the URI query:

        + boolean unitTest = false;
        + if (rqst.getParameter("UnitTest") != null) unitTest = true;

        What does this do?

        These two lines test whether the UnitTest parameter is in the query, to differentiate normal requests from unit-test requests. It is generally not a good idea to have unit test code mingled with the source code, but in our case, since Cactus in-container unit testing is still in an early stage, it cannot set request attributes in the webRequest. Although it has a servletRequestWrapper, the attributes set through it are not visible to the request seen by the source code. To conduct the unit tests successfully, we have to do some extra work in the source code to cooperate with the unit test code.

        • The content of hdfsproxy/conf/ {user-permissions,user-certs}

          appears to be for testing. If so, it belongs in hdfsproxy/src/test
          These two files are also needed for normal functionality. Users need to edit them for their own purposes anyway. I don't think there is much difference between leaving them blank and providing some default testing values.

        • This should be removed from tomcat-web.xml:

        + <!--<context-param>
        + <param-name>hdfsproxy.dfs.namenode.address</param-name>
        + <param-value>ucdev19.inktomisearch.com:54321</param-value>
        + <description> name node address </description>
        + </context-param>-->

        Is this value duplicated in the hdfsproxy configuration and the tomcat configuration?

        Yes, as you can see, this property is commented out with <!-- -->. I need to delete it.

        • These dependencies look suspect (ivy.xml). Are they necessary?

        + <dependency org="httpunit" name="httpunit" rev="1.6" conf="common->master"/>
        + <dependency org="htmlunit" name="htmlunit" rev="1.10" conf="common->master"/>
        +
        + <dependency org="aspectj" name="aspectjrt" rev="1.5.3" conf="common->master"/>

        aspectj is needed for Cactus.
        I tried httpunit for some tests but didn't use it in the end. I forgot to remove those two lines from the ivy file.

        • I'm not sure ProxyUtils::sendCommand should be a public method. Given that it will overwrite several system properties and no tool has a demonstrable need for it yet, it would be better if it were package-private for now.

        I think it is a good idea to make it package-private so that unit tests can still call it.

        Can you run "ant tar" first, then run the tests again to see if they pass? Or would you like me to add a static hadoop jar?

        Thank you.
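        On the package-private point: a method with no access modifier is callable only from its own package, so a unit test placed in org.apache.hadoop.hdfsproxy can still exercise it while outside callers cannot. A minimal sketch of that visibility rule (the sendCommand stand-in below is hypothetical; it illustrates the modifier, not the real ProxyUtils.sendCommand signature or behavior):

```java
// Both classes share a package, so the package-private method is directly
// callable from the second class; code in any other package could not see it.
class ProxyUtilsSketch {
    // No access modifier => package-private (default) visibility.
    // Hypothetical body, standing in for a real helper like sendCommand.
    static boolean sendCommand(String path) {
        return path != null && path.startsWith("/");
    }
}

class ProxyUtilsSketchCaller {
    public static void main(String[] args) {
        // Same package: this call compiles; from another package it would not.
        System.out.println(ProxyUtilsSketch.sendCommand("/some/path"));  // prints "true"
    }
}
```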

        Show
        zhiyong zhang added a comment - On my machine, running ant -Dtestcase=FOO test 1. Still runs the unit test I didn't know tests should be able to be isolated by testcase. I just added that functionality. 2. Fails (times out) Did you run "ant tar" first in the $HADOOP_ROOT dir. hadoop-core-*.jar is needed for the war file. We could have used a static lib dir to hold a pre-compiled hadoop jar, but that would bring some redundancy as a whole. I am not sure in this case which way should we go. Either have a static hadoop jar to make the proxy code more independent but add some redundancy or have some dependency on the hadoop build but have a cleaner structure. I follow the latter in the patch. I can change if necessary. If the "UnitTest" parameter is in the URI query: + boolean unitTest = false; + if (rqst.getParameter("UnitTest") != null) unitTest = true; What does this do? These two lines are used to test if UnitTest parameter is in the query. It is to differentiate normal requests and unit-test-case requests. It is generally not a good idea to have unit test code mingled in the source code. But in our case, as the cactus in-container unit test is still in its early stage, It cannot set the request attributes in the webRequest. Although it has a servletRequestWrapper, but the attributes set through that are not visible to the source code request. To successfully conduct unit test, we have to do some extra work in the source code to cooperate the unit test code. The content of hdfsproxy/conf/ {user-permissions,user-certs} appears to be for testing. If so, it belongs in hdfsproxy/src/test These two files are also needed for normal functionality. User need to edit these two files for their purpose anyway. I don't think leaving them blank and having some default testing value make much difference. 
This should be removed from tomcat-web.xml: + <!--<context-param> + <param-name>hdfsproxy.dfs.namenode.address</param-name> + <param-value>ucdev19.inktomisearch.com:54321</param-value> + <description> name node address </description> + </context-param>--> Is this value duplicated in the hdfsproxy configuration and the tomcat configuration? Yes, as you can see, this property is commented out with <!-- -->. I need to delete it. These dependencies look suspect (ivy.xml). Are they necessary? + <dependency org="httpunit" name="httpunit" rev="1.6" conf="common->master"/> + <dependency org="htmlunit" name="htmlunit" rev="1.10" conf="common->master"/> + + <dependency org="aspectj" name="aspectjrt" rev="1.5.3" conf="common->master"/> aspectj is needed for cactus. I've tried httpunit for some tests, then didn't use it in the end. I forgot to comment out/ remove these two lines in the ivy file. I'm not sure ProxyUtils::sendCommand should be a public method. Given that it will overwrite several system properties and no tool has a demonstrable need for it yet, it would be better if it were package-private for now. I think it is a good idea to make it package-private so that you can still do unit test. Hi, can you do ant tar first, then run the test again to see if you can run the test? Or you want me to add a static hadoop jar? Thank you.
        Chris Douglas added a comment -
        • On my machine, running ant -Dtestcase=FOO test
          1. Still runs the unit test
          2. Fails (times out)

        For (1), all modules should respect the testcase var (Chukwa has an example). If FOO isn't a test in your suite, it should not run. Similarly, ant -Dtestcase=TestHdfsProxy test should run only that test. (2) is a different issue altogether. Does this work for you on a fresh checkout?

        • If the "UnitTest" parameter is in the URI query:
          +    boolean unitTest = false;
          +    if (rqst.getParameter("UnitTest") != null) unitTest = true;
          

        What does this do?

        • The content of hdfsproxy/conf/{user-permissions,user-certs} appears to be for testing. If so, it belongs in hdfsproxy/src/test
        • This should be removed from tomcat-web.xml:
          +   <!--<context-param>
          +      <param-name>hdfsproxy.dfs.namenode.address</param-name>
          +      <param-value>ucdev19.inktomisearch.com:54321</param-value>
          +      <description> name node address </description>
          +    </context-param>-->
          

        Is this value duplicated in the hdfsproxy configuration and the tomcat configuration?

        • These dependencies look suspect (ivy.xml). Are they necessary?
          +   <dependency org="httpunit" name="httpunit" rev="1.6" conf="common->master"/>
          +   <dependency org="htmlunit" name="htmlunit" rev="1.10" conf="common->master"/>
          +
          +   <dependency org="aspectj" name="aspectjrt" rev="1.5.3" conf="common->master"/>
          
        • I'm not sure ProxyUtils::sendCommand should be a public method. Given that it will overwrite several system properties and no tool has a demonstrable need for it yet, it would be better if it were package-private for now.
        zhiyong zhang added a comment -

        Removed
        <dependency org="xerces" name="xercesImpl" rev="2.6.2" conf="common->master"/>
        in the new patch.

        as it generates this exception:
        Exception in thread "main" java.lang.UnsupportedOperationException: This parser does not support specification "null" version "null"

        It did not happen on the old trunk; it started happening when I ran the unit tests after updating to the new trunk. Perhaps some part of the SAX parser handling was updated in the new trunk.

        Another note.

        To run the unit tests, you need to run "ant tar" first in the hadoop dir, as the hdfsproxy war and tests depend on the hadoop-core-*.jar file in the build dir.

        We could have copied hadoop-*.jar to a directory under hdfsproxy and made it static, as chukwa does, but that would introduce some redundancy.

        zhiyong.

        zhiyong zhang added a comment -

        The previous patch did not check in the .keystore file, since svn identifies .keystore as binary.

        I updated the current patch with the help of svn propset.

        The current patch has the keystore file checked in.

        Thanks.

        zhiyong zhang added a comment -

        Added the following changes in this patch:

        1. Changed the build.xml file so that the unit tests can be run with or without the environment variables set.

        2. Restructured the folders so that files used for unit testing are separated from the files used to generate the war.

        3. Generated a testing keystore and certificate with a 20-year expiration date for SSL testing.

        4. Set up the cactus ant task so that the tomcat in-container tests use the existing tomcat-conf files; by doing this, the system can run both in-container http and https tests.

        5. Wrote unit-test code for ProxyFilter.java covering both http and ssl/https cases.

        6. Separated the original HdfsProxy.java code into two parts: the stand-alone hdfsproxy start-up part and the command-line operations part (ProxyUtil.java). ProxyUtil.java is now a client-side tool instead of a proxy-server-side tool; it can be used with both jetty and tomcat to send the clearCache and reloadPermFile commands.

        7. Wrote unit-test code for ProxyUtil.java with SSL connection settings.

        8. Added corresponding start-up/shut-down scripts under the bin directory to start and shut down the proxy in the tomcat case.

        steve_l added a comment -

        Presumably the problem here is that a tomcat installation is required on the local machine, and then its install directory must be set in the env variable, or env.HDFSPROXY_CONF_DIR added to the build directories?

        1. We may be able to have Ivy pull down a copy of tomcat, though as it would require a full unzip and introduces some security issues, I'm not enthusiastic about this. I see that <cargo> does contain a URL to a tomcat version; without at least MD5 checking, that's something to worry about.
        2. All the cactus tests should be skipped if the env variable is not set. <fileset> work can do that relatively easily.
        3. Another option is functional testing with HtmlUnit. This can be used for testing all of the web sites, so it is going to be needed at some point in the future anyway. Instead of running junit tests remotely via cactus, hit some test pages from the outside and see that they return the expected responses.
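        The MD5 concern in point 1 could be expressed with Ant's own <get>, <checksum>, and <fail> tasks. The following is only a sketch: the URL, checksum value, and property names are placeholders, not a proposal for the actual build.xml.

        ```xml
        <!-- Hypothetical sketch: download a container tarball and refuse to
             continue unless its MD5 matches a known value.  The URL and the
             expected checksum below are placeholders. -->
        <get src="http://archive.example.org/tomcat/apache-tomcat-X.Y.Z.tar.gz"
             dest="${build.dir}/tomcat.tar.gz" usetimestamp="true"/>
        <checksum file="${build.dir}/tomcat.tar.gz" algorithm="MD5"
                  property="00000000000000000000000000000000"
                  verifyproperty="tomcat.md5.ok"/>
        <fail message="Tomcat download failed MD5 verification">
          <condition>
            <isfalse value="${tomcat.md5.ok}"/>
          </condition>
        </fail>
        ```

        With both property and verifyproperty set, the <checksum> task compares the computed digest against the expected value and stores true/false in verifyproperty instead of failing outright.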
        Chris Douglas added a comment -

        I mean that the patch prevents one from running the unit tests unless HDFSPROXY_CONF_DIR is set, which cannot be committed. Most users and developers won't have this set, but are interested in running the unit tests (including Hudson). Setting required environment vars in ant, as Chukwa does, isn't great, but it's acceptable. It would also be acceptable to disable that unit test if the env var isn't set, but the chance of a regression is much higher.

        zhiyong zhang added a comment -

        Quote: Ah, I see. Unfortunately, this requires HDFSPROXY_CONF_DIR to be set to run "ant test", a prerequisite that cannot be committed.

        Chris, you mean we cannot have any prerequisite to run "ant test"? But I saw that contrib/chukwa's build.xml uses <property name="hadoop.home.dir" value="${env.HADOOP_HOME}"/>; the only difference is that HADOOP_HOME was set in the hadoop dir's build.xml with <env key="HADOOP_HOME" value="${basedir}"/>. What should we do in this case, though? We need users to configure things before running the program. Shall we give a default HDFSPROXY_CONF_DIR, or maybe ant has an "if"-like statement that lets us set a default HDFSPROXY_CONF_DIR only if the user does not set that environment variable?

        Kan Zhang added a comment -

        > Kan has the context for this, so he can say whether it's an error for this parameter to be undefined.
        Yes, it is. The proxy should refuse to boot if this parameter is undefined.

        Chris Douglas added a comment -

        > I didn't find a place that tells me to use spaces instead of tabs. Also, I thought I was supposed to modify CHANGES.txt, as stated in the guidelines.

        *nod* Sun's conventions don't explicitly state that tabs are excluded, but they are. I'll change the wiki to omit the reference to CHANGES.txt, thanks for pointing that out.

        > I am not sure whether we should throw an exception on this or set a default. [snip] Which default value should I use if I am not supposed to use localhost?

        Setting the cause of the ServletException as an IOException (or even just throwing the ServletException) seems like the correct behavior to me. Kan has the context for this, so he can say whether it's an error for this parameter to be undefined.

        > I've written the instructions in README.txt. You need to export HADOOP_CONF_DIR and HDFSPROXY_CONF_DIR correctly before you run ant war and ant test. I think you didn't set the HDFSPROXY_CONF_DIR environment variable; that is what it complains about.

        Ah, I see. Unfortunately, this requires HDFSPROXY_CONF_DIR to be set to run "ant test", a prerequisite that cannot be committed.

        zhiyong zhang added a comment -

        Thanks Chris for your quick review.

        1. I am following http://wiki.apache.org/hadoop/HowToContribute to make the changes. There is one line there: "Indent two spaces per level, not four." I didn't find a place that tells me to use spaces instead of tabs. Also, I thought I was supposed to modify CHANGES.txt, as stated in the guidelines. Maybe there are some other guidelines that are not written at that link. I will remove the commented-out code in ProxyFilter.java and build.xml.

        2. I agree with you that conf.get("hdfsproxy.dfs.namenode.address", "hdfs://localhost:50888") is more compact. I am not sure whether we should throw an exception on this or set a default. That part of the code is in ProxyFilter's init() function, which implements the Filter interface's init() function. In the Filter interface, init() only throws ServletException, but our type of exception is IOException; I am not sure it is appropriate or even legal to throw a ServletException in this case. Note that this part of the code was not for testing. Which default value should I use if I am not supposed to use localhost?

        3. I agree that src/web/cactus-web.xml should be moved to src/test, as it is only used for unit testing. The way it works is that when you run the unit tests, cactus merges src/web/cactus-web.xml with your web.xml to generate a new web.xml, which sets up all the redirect servlets, etc., for in-container testing. That is done automatically by the cactus framework.

        4. I've written the instructions in README.txt. You need to export HADOOP_CONF_DIR and HDFSPROXY_CONF_DIR correctly before you run ant war and ant test. I think you didn't set the HDFSPROXY_CONF_DIR environment variable; that is what it complains about.

        Thanks.
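        For readers unfamiliar with the cactus-web.xml merge described in point 3 above, such a file typically contributes entries along these lines. This is a sketch of the standard Cactus servlet redirector registration, not the contents of the project's actual cactus-web.xml.

        ```xml
        <!-- Sketch of typical cactus-web.xml content: cactus merges entries like
             these into the application's web.xml so its in-container redirector
             servlet can receive and execute test requests. -->
        <servlet>
          <servlet-name>ServletRedirector</servlet-name>
          <servlet-class>org.apache.cactus.server.ServletTestRedirector</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>ServletRedirector</servlet-name>
          <url-pattern>/ServletRedirector</url-pattern>
        </servlet-mapping>
        ```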

        Chris Douglas added a comment -

        A few procedural nits: per the guidelines, please use spaces instead of tabs, don't modify CHANGES.txt in the patch (this is done by the committer; otherwise, it will conflict with other patches), and commented-out code should be removed (ProxyFilter.java, build.xml).

        I have no knowledge of Tomcat, so someone else should evaluate that, but generally:

        • Defaults from the configuration are passed as the second argument in the call, so:
          +    String nn = conf.get("hdfsproxy.dfs.namenode.address");
          +    if (nn == null) {
          +      nn = "hdfs://localhost:50888";
          +    }
          

          is equivalently: conf.get("hdfsproxy.dfs.namenode.address", "hdfs://localhost:50888"). That said, the default should not be localhost. If this is for testing, the test should set this in the configuration, and- Kan, please correct this if it's mistaken- the proxy should throw if this parameter is undefined.

        • If src/web/cactus-web.xml is required for testing, it should be in src/test, not src/web. I'm not sure I understand how it's used; none of its parameters are referenced in the unit test...
        • Attempting to run the unit test causes compilation failures on my machine (ant -Dtestcase=TestProxyFilter test):
          BUILD FAILED
          /snip/hadoop/build.xml:750: The following error occurred while executing this line:
          /snip/hadoop/src/contrib/build.xml:48: The following error occurred while executing this line:
          /snip/hadoop/src/contrib/hdfsproxy/build.xml:239: /snip/hadoop/src/contrib/hdfsproxy/${env.HDFSPROXY_CONF_DIR} not found.
          
        zhiyong zhang added a comment - - edited

        Overall design:
        Kept the jetty functionality; users can still run the jetty proxy server as before (backward compatible).
        Added tomcat support; users can generate a war file and deploy it under the tomcat webapps folder.
        Integrated cactus unit testing and ivy package management.

        Changes Made in current patch.
        1. build.xml
        included the ant targets war, cactifywar, and test. Users can run "ant war" to generate a war file to deploy under tomcat's webapps directory; they can also run "ant test" to invoke the cactus tests against the war file.

        2. ivy.xml and ivy/libraries.properties
        included all the necessary jar-file dependencies for the cactus tests, using ivy package management.

        3. src/java/..../ProxyFilter.java
        changed the init function to read the configuration files and set the name.node.address and name.conf properties.

        4. README.txt.
        added instructions on how to deploy under tomcat.

        New files added for current patch.

        1. src/web/ folder.
        it contains web.xml and cactus-web.xml, for the major servlet configurations and for cactus testing, respectively.
        2. src/test/..../TestProxyFilter.java
        a cactus test for the ProxyFilter.java code

        Kan Zhang added a comment -

        > Can you elaborate on what "good production support" means?
        Yahoo has standardized on an internally maintained version of Tomcat with custom modules and filters. There is a group who provide support and updates for it.

        > It seems reasonable to have an ant target that packages the hdfsproxy as a .war file that could be used in Tomcat.
        Yes, you're right. After talking with Zhiyong, I think we can keep Jetty and add support for Tomcat at the same time. I'll change the title of this JIRA accordingly.

        Doug Cutting added a comment -

        > 1. The embedded feature of Jetty is not required as hdfsproxy is intended as a standalone server.

        That's a reason Jetty isn't absolutely required, not a reason to use Tomcat. And are you certain that we'd never want to bundle the hdfsproxy with some other daemon?

        > 2. Tomcat has good production support at Yahoo, whereas Jetty doesn't.

        Can you elaborate on what "good production support" means?

        It seems reasonable to have an ant target that packages the hdfsproxy as a .war file that could be used in Tomcat. Is more than that required to satisfy this?

        Kan Zhang added a comment -

        > There are some nice aspects of Jetty which would be lost
        I agree. If there is consensus on using Jetty for the open source version, I'm open to it. However, since Yahoo is going to use Tomcat internally, it would be easier if we don't have to deal with 2 containers when we add new features. On the other hand, servlet container interfaces are fairly standardized. If we need to switch back to Jetty in the future for some reason, it shouldn't be hard.

        steve_l added a comment -

        There are some nice aspects of Jetty which would be lost:

        - very easy to manage
        - consistency with the rest of the codebase
        - lightweight installation

        I'm not expressing a preference for either, just making clear that Jetty is very convenient a lot of the time.


          People

          • Assignee:
            zhiyong zhang
            Reporter:
            Kan Zhang
          • Votes:
            0
            Watchers:
            4
