Bug 44620 - infinite loop in NIO connector code
Summary: infinite loop in NIO connector code
Status: RESOLVED FIXED
Alias: None
Product: Tomcat 6
Classification: Unclassified
Component: Connectors
Version: 6.0.16
Hardware: PC Linux
Importance: P4 normal
Target Milestone: default
Assignee: Tomcat Developers Mailing List
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2008-03-17 08:53 UTC by tangy
Modified: 2008-04-15 10:07 UTC
CC List: 2 users



Description tangy 2008-03-17 08:53:41 UTC
The code below can cause Tomcat to enter an infinite loop in the class InternalNioOutputBuffer

    private synchronized void addToBB(byte[] buf, int offset, int length) throws IOException {
        // Never terminates when the write buffer's capacity is smaller than length:
        while (socket.getBufHandler().getWriteBuffer().remaining() < length) {
            flushBuffer();
        }

when the socket's write buffer is smaller than length: flushBuffer() empties the buffer, but remaining() can never reach length, so the loop condition never becomes false.
The write buffer size comes from socket.appWriteBufSize, which is 8192 by default; the value of length is limited by maxHttpHeaderSize, which is 9000 here. So the conditions for an infinite loop exist, and it did happen.
It can be avoided by configuring the two values consistently in server.xml.
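To make the failure mode concrete, here is a minimal, self-contained sketch (using a plain java.nio.ByteBuffer rather than Tomcat's actual buffer handler, with the class name WriteLoopSketch invented for illustration) of why the loop guard can never become false when the buffer capacity is below length:

```java
import java.nio.ByteBuffer;

// Sketch only: models the termination condition of addToBB, not the real
// InternalNioOutputBuffer. The values match the defaults described above.
public class WriteLoopSketch {
    public static void main(String[] args) {
        int appWriteBufSize = 8192; // socket.appWriteBufSize default
        int length = 9000;          // header length, bounded by maxHttpHeaderSize

        ByteBuffer writeBuffer = ByteBuffer.allocate(appWriteBufSize);

        // Even immediately after a flush empties the buffer, remaining() can
        // be at most capacity(). If capacity() < length, the guard
        // "remaining() < length" is always true and the loop never exits.
        boolean canEverFit = writeBuffer.capacity() >= length;
        System.out.println("capacity=" + writeBuffer.capacity()
                + " length=" + length + " canEverFit=" + canEverFit);
    }
}
```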
Comment 1 Christophe Pierret 2008-03-19 06:59:16 UTC
We had a similar issue that was fixed by applying the following patch:
http://svn.apache.org/viewvc?view=rev&revision=618420

Can you try this patch?
Comment 2 Filip Hanik 2008-03-19 08:07:44 UTC
I will put in a check and throw an exception if the system is misconfigured, to make this issue easier to detect.
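A guard of the kind described might look like the following sketch (class and method names are illustrative, not Tomcat's actual code): fail fast with an exception when the configuration is inconsistent, instead of spinning at request time.

```java
// Hypothetical configuration check: reject the misconfiguration up front
// instead of entering an infinite loop while writing response headers.
public class BufferConfigCheck {
    static void checkWriteBufferConfig(int appWriteBufSize, int maxHttpHeaderSize) {
        if (appWriteBufSize < maxHttpHeaderSize) {
            throw new IllegalArgumentException(
                "socket.appWriteBufSize (" + appWriteBufSize
                + ") must be >= maxHttpHeaderSize (" + maxHttpHeaderSize + ")");
        }
    }

    public static void main(String[] args) {
        checkWriteBufferConfig(10240, 10240); // consistent: passes
        try {
            checkWriteBufferConfig(8192, 9000); // misconfigured: throws
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```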
Comment 3 Filip Hanik 2008-03-19 09:06:18 UTC
I also tested the trunk patch, and couldn't get it to work properly either.
Comment 4 Filip Hanik 2008-03-19 09:06:36 UTC
ignore previous comment, wrong bug :)
Comment 5 tangy 2008-03-21 00:02:24 UTC
Sorry, I did not express it clearly (we made some mistakes in the report). Our application triggered the following problem:

2008-3-21 14:54:12 org.apache.catalina.connector.CoyoteAdapter service
SEVERE: An exception or error occurred in the container during the request processing
java.lang.ArrayIndexOutOfBoundsException: 8192
	at org.apache.coyote.http11.InternalNioOutputBuffer.write(InternalNioOutputBuffer.java:734)
	at org.apache.coyote.http11.InternalNioOutputBuffer.write(InternalNioOutputBuffer.java:641)
	at org.apache.coyote.http11.InternalNioOutputBuffer.sendHeader(InternalNioOutputBuffer.java:507)
	at org.apache.coyote.http11.Http11NioProcessor.prepareResponse(Http11NioProcessor.java:1707)
	at org.apache.coyote.http11.Http11NioProcessor.action(Http11NioProcessor.java:1023)
	at org.apache.coyote.Response.action(Response.java:183)
	at org.apache.coyote.Response.sendHeaders(Response.java:379)
	at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:305)
	at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:273)
	at org.apache.catalina.connector.Response.finishResponse(Response.java:492)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:310)
	at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:879)
	at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:719)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2080)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
	at java.lang.Thread.run(Thread.java:619)
2008-3-21 14:54:12 org.apache.coyote.http11.Http11NioProcessor endRequest
SEVERE: Error finishing response
java.lang.ArrayIndexOutOfBoundsException
	at java.lang.System.arraycopy(Native Method)
	at org.apache.coyote.http11.InternalNioOutputBuffer.write(InternalNioOutputBuffer.java:703)
	at org.apache.coyote.http11.InternalNioOutputBuffer.sendStatus(InternalNioOutputBuffer.java:460)
	at org.apache.coyote.http11.Http11NioProcessor.prepareResponse(Http11NioProcessor.java:1696)
	at org.apache.coyote.http11.Http11NioProcessor.action(Http11NioProcessor.java:1023)
	at org.apache.coyote.Response.action(Response.java:181)
	at org.apache.coyote.http11.InternalNioOutputBuffer.endRequest(InternalNioOutputBuffer.java:382)
	at org.apache.coyote.http11.Http11NioProcessor.endRequest(Http11NioProcessor.java:977)
	at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:913)
	at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:719)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2080)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
	at java.lang.Thread.run(Thread.java:619)
We changed server.xml and set maxHttpHeaderSize="1024", which caused an infinite loop at 100% CPU. The detailed stack trace is below (Tomcat 6.0.16):
Daemon Thread [catalina-exec-4] (Suspended)	
	InternalNioOutputBuffer.flushBuffer() line: 768	
	InternalNioOutputBuffer.addToBB(byte[], int, int) line: 616	
	InternalNioOutputBuffer.commit() line: 608	
	Http11NioProcessor.action(ActionCode, Object) line: 1024	
	Response.action(ActionCode, Object) line: 183	
	Response.sendHeaders() line: 379	
	OutputBuffer.doFlush(boolean) line: 305	
	OutputBuffer.close() line: 273	
	Response.finishResponse() line: 492	
	CoyoteAdapter.service(Request, Response) line: 310	
	Http11NioProcessor.process(NioChannel) line: 879	
	Http11NioProtocol$Http11ConnectionHandler.process(NioChannel) line: 719	
	NioEndpoint$SocketProcessor.run() line: 2080	
	ThreadPoolExecutor$Worker.runTask(Runnable) line: 885	
	ThreadPoolExecutor$Worker.run() line: 907	
	Thread.run() line: 619	

In the end we also increased socket.appWriteBufSize to "10240", which solved the problem.
Comment 6 tangy 2008-03-21 00:04:36 UTC
Sorry for the typo:
   we changed the server.xml and set maxHttpHeaderSize="1024" , 
should be 
   we changed the server.xml and set maxHttpHeaderSize="10240" , 
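For reference, a Connector configuration consistent with this workaround could look like the sketch below (the attribute values are taken from the comments above; the port and the surrounding server.xml are assumed, and this is not the reporter's exact configuration):

```xml
<!-- Keep socket.appWriteBufSize >= maxHttpHeaderSize so that the response
     headers always fit into the NIO write buffer. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxHttpHeaderSize="10240"
           socket.appWriteBufSize="10240" />
```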
Comment 7 Mark Thomas 2008-03-24 13:53:02 UTC
The patch referred to in comment #2 is insufficient to fix this issue. I have committed a fix to trunk and proposed it for 6.0.x.
Comment 8 Mark Thomas 2008-04-15 10:07:11 UTC
The fix for this will be in 6.0.17 onwards.