Hadoop Common / HADOOP-7688

When a servlet filter throws an exception in init(..), the Jetty server fails silently.

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.23.0, 0.24.0
    • Fix Version/s: 1.2.0, 3.0.0, 2.0.3-alpha, 0.23.11
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      When a servlet filter throws a ServletException in init(..), the exception is logged by Jetty but not re-thrown to the caller. As a result, the Jetty server fails silently.
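The failure mode can be modelled with a toy container (the Filter and Context types below are hypothetical stand-ins, not the Servlet or Jetty API): the container catches the filter's init exception, logs it, and carries on, so the caller of start() never learns that the context is dead.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the bug (hypothetical types, not the Servlet or Jetty API):
// the container swallows a filter's init() exception instead of rethrowing it.
public class SilentStartup {
    interface Filter {
        void init() throws Exception;
    }

    static class Context {
        final List<Filter> filters = new ArrayList<>();
        boolean running;

        void start() {
            try {
                for (Filter f : filters) {
                    f.init();
                }
                running = true;
            } catch (Exception e) {
                // Mimics Jetty's behavior: log and continue; the caller sees no failure.
                System.err.println("Failed startup of context: " + e);
            }
        }
    }

    public static void main(String[] args) {
        Context ctx = new Context();
        ctx.filters.add(() -> {
            throw new Exception("filter init failed");
        });
        ctx.start(); // returns normally despite the failure
        System.out.println("running=" + ctx.running); // running=false, but nobody checked
    }
}
```

Nothing about the start() call itself signals the failure; a caller must inspect the context's state afterwards, which is exactly the silent-failure behavior described above.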

      1. HADOOP-7688-branch-1.patch
        3 kB
        Uma Maheswara Rao G
      2. HADOOP-7688-branch-2.patch
        2 kB
        Uma Maheswara Rao G
      3. HADOOP-7688.patch
        3 kB
        Uma Maheswara Rao G
      4. org.apache.hadoop.http.TestServletFilter-output.txt
        17 kB
        Tsz Wo Nicholas Sze
      5. filter-init-exception-test.patch
        0.8 kB
        Tsz Wo Nicholas Sze

        Issue Links

          Activity

          Tsz Wo Nicholas Sze added a comment -

          The problem was discovered by Arpit in HDFS-2368.

          Tsz Wo Nicholas Sze added a comment -

          filter-init-exception-test.patch: modified TestServletFilter to illustrate the problem.
          org.apache.hadoop.http.TestServletFilter-output.txt: test log

          Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.135 sec <<< FAILURE!
          testServletFilter(org.apache.hadoop.http.TestServletFilter)  Time elapsed: 1.792 sec  <<< FAILURE!
          java.lang.AssertionError: expected:</stacks> but was:<null>
          	at org.junit.Assert.fail(Assert.java:91)
          	at org.junit.Assert.failNotEquals(Assert.java:645)
          	at org.junit.Assert.assertEquals(Assert.java:126)
          	at org.junit.Assert.assertEquals(Assert.java:145)
          	at org.apache.hadoop.http.TestServletFilter.testServletFilter(TestServletFilter.java:133)
          	...
          
          Tsz Wo Nicholas Sze added a comment -

          Note that the test did not fail on HTTP server startup; it failed on the first client access.

          Uma Maheswara Rao G added a comment -

          Thanks Nicholas for the information. I will take a look at it.

          Thanks
          Uma

          Uma Maheswara Rao G added a comment -

          Hi Nicholas,

          I am not very sure about Jetty's internal behavior.
          From the tests, what I understood is: when we start the HttpServer, it tries to start all the contexts, and for each context it invokes the filter inits as well.

          As per your expectation, if init throws any exception, we need to rethrow it. Here the caller will be the HttpServer start method.
          Since Jetty internally handles those exceptions and fails the context startups, we may not be able to get the exception out (I could not find a way).

          One way of getting the failure status of the contexts is to fetch the contexts and check whether any of them failed to start; if any context startup failed, we can throw an exception. Something like the snippet below, at the end of the HttpServer start method:
          // This snippet is just for understanding the fix
          Handler[] handlers = webServer.getHandlers();
          for (Handler handler : handlers) {
            if (!handler.isRunning()) {
              throw new IOException("Problem starting http server");
            }
          }

          This may lead to server (NN, JT, DN, ...) shutdown.

          But the concern here is: when we consider Jetty as a standalone server with multiple hosted applications, this solution doesn't fit.
          Also, consider the case where load-on-startup is false; this solution may not work then either.

          Please correct me if my understanding is incorrect.

          Thanks,
          Uma
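The proposed check can be restated as a self-contained sketch (the Handler interface below is a minimal, hypothetical stand-in for Jetty's handler type; only isRunning() is modelled):

```java
import java.io.IOException;

// Sketch of the proposed check at the end of HttpServer.start(): if any
// context handler failed to start, surface the failure as an IOException.
public class StartupCheck {
    // Minimal stand-in for Jetty's handler type; only isRunning() is modelled.
    interface Handler {
        boolean isRunning();
    }

    static void assertAllRunning(Handler[] handlers) throws IOException {
        for (Handler handler : handlers) {
            if (!handler.isRunning()) {
                throw new IOException("Problem starting http server");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        assertAllRunning(new Handler[] { () -> true }); // all running: no exception
        try {
            assertAllRunning(new Handler[] { () -> true, () -> false });
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The design point under debate is precisely this loop: it only works if a filter's init failure actually leaves its context handler in a not-running state by the time start() returns.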

          Steve Loughran added a comment -

          It's part of the Servlet API spec that a servlet may not get its init() call until the first HTTP request comes in. This stack trace implies that the same thing happens on filters.

          Looking at the Servlet 3.0 spec, all they say is "After deployment of the Web application, and before a request causes the container
          to access a Web resource, the container must locate the list of filters that must be
          applied to the Web resource as described below".

          This could imply that it's done when a request comes in. There's a valid reason for doing that: it allows people to set up filters after a servlet context is brought up, which is something I've done in the past.

          Are all the filters Hadoop-specific? If so, they could have a base class that logs and reports the exceptions. Then have a preheater that GETs all the servlet pages; that's always handy for compile-on-demand JSP pages anyway, and if any page fails the preheater can report it.

          Uma Maheswara Rao G added a comment -

          Thanks a lot for taking a look!

          Sorry for the late reply; I was on sudden leave. I will take up my tasks from this Monday.

          I did not find any configuration item for controlling filter loading.
          I know we can control servlet loading by using the load-on-startup property.

          I think filter init methods will be called when loading the web contexts (or apps).

          In this case, when loading the web contexts, if the filter init API throws any exception, the application fails to load. Because of one application's load failure, the server cannot be shut down in the ideal case, so it just continues.
          When the first request comes, it gives a 503 response.

          In the ideal case the server should not shut down, so it continues. But in the Hadoop-specific case, I think we can throw an exception if any of the context loading fails.

          Nicholas, can you please give your opinion as well?

          Thanks
          Uma

          Tsz Wo Nicholas Sze added a comment -

          How about pinging itself once the http server has started?
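A rough sketch of the self-ping idea (the method name, URL handling, and the choice to treat any non-200 status as fatal are illustrative assumptions, not what the eventual patch did): after start() returns, issue one GET against the server and fail fast on a bad response.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the self-ping idea: after the HTTP server starts, issue one GET
// against it and fail startup on a bad response. A context whose filter threw
// in init() answers 503 Service Unavailable, so the ping would catch it.
public class SelfPing {
    static void ping(String urlString) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        try {
            conn.setRequestMethod("GET");
            int code = conn.getResponseCode();
            if (code != HttpURLConnection.HTTP_OK) {
                throw new IOException("HTTP server self-check failed: "
                    + code + " from " + urlString);
            }
        } finally {
            conn.disconnect();
        }
    }
}
```

As the discussion below notes, one ping against a single URL only exercises the filters mapped to that path, which is the main limitation of this approach.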

          Uma Maheswara Rao G added a comment -

          Thanks Nicholas for taking a look!

          Did you see my first comment?

          Handler[] handlers = webServer.getHandlers();
          for (Handler handler : handlers) {
            if (!handler.isRunning()) {
              throw new IOException("Problem starting http server");
            }
          }

          Since we can access the webServer directly, why can't we check the handlers (see the code snippet above)?
          HttpServer sets the context handlers on the Server:

          final String appDir = getWebAppsPath();
          ContextHandlerCollection contexts = new ContextHandlerCollection();
          webServer.setHandler(contexts);

          So, if any of the filter inits fails, the corresponding context will fail to start.

          Are you seeing any issue with this?

          With your proposed one, we would need to create an HTTP connection explicitly and check.

          Thanks,
          Uma

          Tsz Wo Nicholas Sze added a comment -

          Hi Uma,

          According to Steve, the servlets may not get init'ed until the first HTTP request. In filter-init-exception-test.patch, we throw an exception inside init(). Is the exception logged after the HTTP server started and right before the first HTTP request? So it may be impossible to check the filter without an HTTP connection.

          Uma Maheswara Rao G added a comment -

          Hi Nicholas,
          Yes, that is correct for servlet inits. Also, we have the control parameter for load-on-startup. But I don't see anywhere in the spec, related to filters, that they follow the same rule as servlet inits.

          As per my debug information, the filter init is always invoked when loading the contexts. Since we throw the exception from init, context loading fails, so the application is not deployed properly. Whenever the first request comes, it gives a 503 Service Unavailable response.

          Am I missing something here?

          Tsz Wo Nicholas Sze added a comment -

          From the log of TestWebHdfsFileSystemContract below, some filters and Jersey resources are not initialized before http server startup.

          2011-11-02 11:58:24,397 INFO  namenode.NameNode (NameNodeHttpServer.java:run(182)) - NameNode Web-server up at: localhost/127.0.0.1:52819
          2011-11-02 11:58:24,397 INFO  ipc.Server (Server.java:run(649)) - IPC Server Responder: starting
          2011-11-02 11:58:24,397 INFO  ipc.Server (Server.java:run(480)) - IPC Server listener on 52818: starting
          2011-11-02 11:58:24,398 INFO  ipc.Server (Server.java:run(1487)) - IPC Server handler 0 on 52818: starting
          2011-11-02 11:58:24,398 INFO  ipc.Server (Server.java:run(1487)) - IPC Server handler 1 on 52818: starting
          2011-11-02 11:58:24,398 INFO  ipc.Server (Server.java:run(1487)) - IPC Server handler 2 on 52818: starting
          2011-11-02 11:58:24,399 INFO  namenode.NameNode (NameNode.java:initialize(344)) - NameNode up at: localhost/127.0.0.1:52818
          2011-11-02 11:58:24,400 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(883)) - Starting DataNode 0 with dfs.datanode.data.dir: file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/,file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/
          2011-11-02 11:58:24,407 INFO  security.UserGroupInformation (UserGroupInformation.java:initUGI(237)) - JAAS Configuration already set up for Hadoop, not re-installing.
          2011-11-02 11:58:24,439 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
          2011-11-02 11:58:24,523 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(150)) - DataNode metrics system started (again)
          2011-11-02 11:58:24,530 INFO  datanode.DataNode (DataNode.java:initDataXceiver(723)) - Opened info server at 52820
          2011-11-02 11:58:24,532 INFO  datanode.DataNode (DataXceiverServer.java:<init>(77)) - Balancing bandwith is 1048576 bytes/s
          2011-11-02 11:58:24,535 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(477)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
          2011-11-02 11:58:24,536 WARN  lib.StaticUserWebFilter (StaticUserWebFilter.java:getUsernameFromConf(141)) - dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
          2011-11-02 11:58:24,536 INFO  http.HttpServer (HttpServer.java:addFilter(455)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
          2011-11-02 11:58:24,537 INFO  http.HttpServer (HttpServer.java:addFilter(462)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          2011-11-02 11:58:24,537 INFO  http.HttpServer (HttpServer.java:addFilter(462)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          2011-11-02 11:58:24,541 INFO  http.HttpServer (HttpServer.java:addJerseyResourcePackage(382)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
          2011-11-02 11:58:24,542 INFO  http.HttpServer (HttpServer.java:start(647)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
          
          Uma Maheswara Rao G added a comment -

          2011-11-04 10:55:04,403 INFO http.HttpServer (HttpServer.java:start(647)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
          2011-11-04 10:55:04,434 INFO http.HttpServer (HttpServer.java:start(652)) - listener.getLocalPort() returned 3356 webServer.getConnectors()[0].getLocalPort() returned 3356
          2011-11-04 10:55:04,434 INFO http.HttpServer (HttpServer.java:start(685)) - Jetty bound to port 3356
          2011-11-04 10:55:04,434 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
          2011-11-04 10:55:04,668 WARN mortbay.log (Slf4jLog.java:warn(76)) - failed recording: javax.servlet.ServletException
          2011-11-04 10:55:04,668 WARN mortbay.log (Slf4jLog.java:warn(89)) - Failed startup of context org.mortbay.jetty.webapp.WebAppContext@10045eb{/,file:/E:/Trunk-Common/hadoop-common-project/hadoop-common/target/test-classes/webapps/test}
          javax.servlet.ServletException
          at org.apache.hadoop.http.TestGlobalFilter$RecordingFilter.init(TestGlobalFilter.java:51)
          at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
          at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
          at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
          at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
          at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
          at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
          at org.mortbay.jetty.Server.doStart(Server.java:224)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)
          at org.apache.hadoop.http.TestGlobalFilter.testServletFilter(TestGlobalFilter.java:108)
          .........
          ........
          2011-11-04 10:55:04,700 WARN mortbay.log (Slf4jLog.java:warn(76)) - failed recording: javax.servlet.ServletException
          2011-11-04 10:55:04,700 WARN mortbay.log (Slf4jLog.java:warn(76)) - failed org.mortbay.jetty.servlet.Context@1cdeff{/static,file:/E:/Trunk-Common/hadoop-common-project/hadoop-common/target/test-classes/webapps/static/}: javax.servlet.ServletException
          2011-11-04 10:55:04,700 WARN mortbay.log (Slf4jLog.java:warn(76)) - failed ContextHandlerCollection@6cb8: javax.servlet.ServletException
          2011-11-04 10:55:04,700 WARN mortbay.log (Slf4jLog.java:warn(89)) - Error starting handlers
          javax.servlet.ServletException
          at org.apache.hadoop.http.TestGlobalFilter$RecordingFilter.init(TestGlobalFilter.java:51)
          at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
          at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
          at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
          at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
          at org.mortbay.jetty.Server.doStart(Server.java:224)
          at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)
          at org.apache.hadoop.http.TestGlobalFilter.testServletFilter(TestGlobalFilter.java:108)
          ...........
          ..........
          2011-11-04 10:55:04,731 INFO mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@0.0.0.0:3356
          2011-11-04 10:55:04,731 WARN http.HttpServer (TestGlobalFilter.java:access(82)) - access http://localhost:3356/fsck
          2011-11-04 10:55:04,856 WARN http.HttpServer (TestGlobalFilter.java:access(96)) - urlstring=http://localhost:3356/fsck
          java.io.IOException: Server returned HTTP response code: 503 for URL: http://localhost:3356/fsck
          at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1241)
          at org.apache.hadoop.http.TestGlobalFilter.access(TestGlobalFilter.java:89)
          at org.apache.hadoop.http.TestGlobalFilter.testServletFilter(TestGlobalFilter.java:128)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
          -----------------------------------------------------------------------------------
          ------------------------------------------------------------------------------------

          See the javadoc for ServletHandler#initialize:
          http://jetty.codehaus.org/jetty/jetty-6/apidocs/org/mortbay/jetty/servlet/ServletHandler.html#initialize()

          public void initialize() throws Exception
              Initialize filters and load-on-startup servlets. Called automatically from start if autoInitializeServlet is true.

          From your above comment: for whatever filters are available in the config under hadoop.http.filter.initializers, it adds the filters and mappings:

          final FilterInitializer[] initializers = getFilterInitializers(conf);
          if (initializers != null) {
            for (FilterInitializer c : initializers) {
              c.initFilter(this, conf);
            }
          }

          So they are initialized while the contexts are starting.
          I am not sure whether we add all the filters by default; is there any other way to add them lazily?

          Regarding to other proposal:

          How about pinging itself once the http server has started?

          Even if we assume not all filters will initialize at startup,
          I feel this may not help, because we would need to send requests for all the URLs that map to filters to ensure all filters initialized properly. Creating connections to all the URLs may not be the correct choice,
          and I think connecting to all the URLs would be equivalent to explicitly setting servletHandlers.setAutoInitializeServlets(true).

          From Spec: https://jira.sakaiproject.org/secure/attachment/16135/servlet-2_4-fr-spec.pdf
          SRV.6.2.1 Filter Lifecycle
          After deployment of the Web application, and before a request causes the container
          to access a Web resource, the container must locate the list of filters that must be
          applied to the Web resource as described below. The container must ensure that it
          has instantiated a filter of the appropriate class for each filter in the list, and called its
          init(FilterConfig config) method. The filter may throw an exception to indicate
          that it cannot function properly. If the exception is of type UnavailableException,
          the container may examine the isPermanent attribute of the exception and may
          choose to retry the filter at some later time.
          Only one instance per <filter> declaration in the deployment descriptor is
          instantiated per Java Virtual Machine (JVM) of the container. The container
          provides the filter config as declared in the filter’s deployment descriptor, the
          reference to the ServletContext for the Web application, and the set of
          initialization parameters.

          Thanks
          Uma

final FilterInitializer[] initializers = getFilterInitializers(conf); if (initializers != null ) { for (FilterInitializer c : initializers) { c.initFilter( this , conf); } } So, while starting the contexts they are getting initialized. I am not sure we added all the filters by default. is there any other way to add lazily? Regarding to other proposal: How about pinging itself once the http server has started? Even if we assume all filters will not initialize at startup, I feel this may not help because we need to send the requests for all the urls which are mapping to Fileters to ensure all filters initialized properly. Creating connections with all the urls may not be the correct choice. and I think Connecting with all the urls wil be equals to setting servletHandlers.setAutoInitializeServlets(true); explicitely. From Spec: https://jira.sakaiproject.org/secure/attachment/16135/servlet-2_4-fr-spec.pdf SRV.6.2.1 Filter Lifecycle After deployment of the Web application, and before a request causes the container to access a Web resource, the container must locate the list of filters that must be applied to the Web resource as described below. The container must ensure that it has instantiated a filter of the appropriate class for each filter in the list, and called its init(FilterConfig config) method. The filter may throw an exception to indicate that it cannot function properly. If the exception is of type UnavailableException, the container may examine the isPermanent attribute of the exception and may choose to retry the filter at some later time. Only one instance per <filter> declaration in the deployment descriptor is instantiated per Java Virtual Machine (JVMTM) of the container. The container provides the filter config as declared in the filter’s deployment descriptor, the reference to the ServletContext for the Web application, and the set of initialization parameters. Thanks Uma
          Tsz Wo Nicholas Sze added a comment -

          > From the log of TestWebHdfsFileSystemContract below, ...

          I misread the log: there are namenode and datanode web servers. So the filters were actually initialized before startup.

          Uma, your solution of checking handlers[i].isRunning() is probably good enough.

          Uma Maheswara Rao G added a comment -

          Updated the patch for review!

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12502601/HADOOP-7688.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 7 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/361//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/361//artifact/trunk/hadoop-common-project/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/361//console

          This message is automatically generated.

          Uma Maheswara Rao G added a comment -

          -1 findbugs. The patch appears to introduce 7 new Findbugs (version 1.3.9) warnings.

          The findbugs warnings are not related to this patch.

          Tsz Wo Nicholas Sze added a comment -

          +1 patch looks good.

          Tsz Wo Nicholas Sze added a comment -

          I have committed this. Thanks, Uma!

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #1327 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1327/)
          HADOOP-7688. Add servlet handler check in HttpServer.start(). Contributed by Uma Maheswara Rao G

          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198924
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #1253 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1253/)
          HADOOP-7688. Add servlet handler check in HttpServer.start(). Contributed by Uma Maheswara Rao G

          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198924
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #1275 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1275/)
          HADOOP-7688. Add servlet handler check in HttpServer.start(). Contributed by Uma Maheswara Rao G

          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198924
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #857 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/857/)
          HADOOP-7688. Add servlet handler check in HttpServer.start(). Contributed by Uma Maheswara Rao G

          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198924
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #891 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/891/)
          HADOOP-7688. Add servlet handler check in HttpServer.start(). Contributed by Uma Maheswara Rao G

          szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198924
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
          Uma Maheswara Rao G added a comment -

          I am seeing this in branch-2 with security.

          It failed silently in KerberosAuthenticationHandler.init, yet DFS started successfully. As a result, all clients get 503 Service Unavailable, and checkpointing and other operations that go through the HttpServer fail as well.

          With this patch we throw an exception from the server start itself if any Filter#init fails.

          2012-06-29 11:44:24,156 WARN org.mortbay.log: Failed startup of context org.mortbay.jetty.webapp.WebAppContext@512d8ecd{/,file:/home/security/install/hadoop/namenode/share/hadoop/hdfs/webapps/hdfs}
          javax.servlet.ServletException: javax.security.auth.login.LoginException: Null Server Key 
          	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:185)
          	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:146)
          	at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
          	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
          	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
          	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
          	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
          	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
          	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
          	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
          	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
          	at org.mortbay.jetty.Server.doStart(Server.java:224)
          	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
          	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:617)
          	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:173)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:540)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:482)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:423)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:601)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:582)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1143)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1202)
          Caused by: javax.security.auth.login.LoginException: Null Server Key 
          	at com.sun.security.auth.module.Krb5LoginModule.commit(Krb5LoginModule.java:965)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          	at java.lang.reflect.Method.invoke(Method.java:597)
          	at javax.security.auth.login.LoginContext.invoke(LoginContext.java:769)
          	at javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)
          	at javax.security.auth.login.LoginContext$5.run(LoginContext.java:706)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:703)
          	at javax.security.auth.login.LoginContext.login(LoginContext.java:576)
          	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:169)
          	... 24 more
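
          The check described above can be sketched roughly as follows. The names below (Handler, verifyHandlersStarted) are hypothetical stand-ins, not the actual HttpServer or Jetty 6 API; the real patch inspects the servlet handlers after webServer.start() so that a Filter#init failure is surfaced to the caller instead of being swallowed:

          ```java
          import java.io.IOException;
          import java.util.List;

          // Minimal sketch under assumed names (NOT the actual Hadoop/Jetty 6 code):
          // after the Jetty server starts, walk the handlers and fail fast if any of
          // them is not running, because Jetty 6 logs a ServletException thrown from
          // Filter#init but does not rethrow it to the caller.
          public class StartupCheck {

              /** Hypothetical stand-in for Jetty's handler lifecycle state. */
              public interface Handler {
                  boolean isRunning();
              }

              public static void verifyHandlersStarted(List<Handler> handlers)
                      throws IOException {
                  for (Handler h : handlers) {
                      if (!h.isRunning()) {
                          // Surface the silent startup failure here instead of letting
                          // every later request fail with 503 Service Unavailable.
                          throw new IOException(
                              "Problem in starting http server: a handler failed to start");
                      }
                  }
              }
          }
          ```

          With a check like this, a bad filter (for example a misconfigured KerberosAuthenticationHandler) aborts NameNode startup immediately rather than leaving a running server that returns 503 for every request.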
          
          Uma Maheswara Rao G added a comment -

          Maybe it is worth merging this to branch-2?

          Todd Lipcon added a comment -

          Yes, let's merge this to branch-2 and branch-1 if applicable.

          Uma Maheswara Rao G added a comment -

          Ported to branch-2. Committed revision 1384416.

          Uma Maheswara Rao G added a comment -

          Here is the ported patch.

          Uma Maheswara Rao G added a comment -

          I have just committed this to branch-1.


            People

            • Assignee:
              Uma Maheswara Rao G
              Reporter:
              Tsz Wo Nicholas Sze
            • Votes:
              0
              Watchers:
              5
