Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Incomplete
Affects Version/s: 2.1.4
Fix Version/s: None
Component/s: None
Operating System: Linux
Platform: Other
Bugzilla Id: 31116
Description
Hi
We have a Cocoon instance running in production serving an application. When
search engines or a large number of users hit the site, we see the number of
open file handles rising. Most of the time this is OK, but sometimes the whole
JVM crashes with a "too many open files" exception. The only reason I have
marked this as critical is that when the JVM goes down, all the users start
complaining, and this is a production system.
The operations guys are saying there is a potential denial-of-service problem
as a result.
We have done some investigation with a very simple pipeline that generates
from an XML file, transforms (XSLT) with a document() XPath lookup into
another file, and then serialises the result. We would have expected to see
one file handle per file per thread, i.e. two per thread. We ran the load test
with 50 threads and no delays.
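For reference, here is a minimal standalone sketch of what that pipeline does,
written against plain JAXP rather than Cocoon's own components. The file names
are made up, and the stylesheet is assumed to contain a document('other.xml')
lookup:

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class PipelineRepro {
        public static void main(String[] args) throws Exception {
            TransformerFactory tf = TransformerFactory.newInstance();
            // transform.xsl is assumed to call document('other.xml')
            Transformer t = tf.newTransformer(
                    new StreamSource(new File("transform.xsl")));
            // One handle for input.xml plus one opened by document() for
            // other.xml: the two-per-thread figure the load test assumed.
            t.transform(new StreamSource(new File("input.xml")),
                        new StreamResult(new File("out.xml")));
        }
    }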
Observations.
1. Using lsof on the JVM process group for Tomcat, we see between 2000 and
4000 file handles open to the files referenced in the pipeline.
2. The number open appears to follow garbage-collection cycles, i.e. it drops
when garbage collection is performed (see the sketch after this list).
3. When the load is taken off, the total soon returns to 0.
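The GC correlation in observation 2 is easy to reproduce outside Cocoon. On
the JDKs of this era, FileInputStream.finalize() closes the underlying
descriptor, so a stream that is never closed explicitly only gives its handle
back after collection and finalization. A hypothetical standalone check
(watch the process with lsof from another terminal):

    import java.io.File;
    import java.io.FileInputStream;

    public class HandleLeakDemo {
        public static void main(String[] args) throws Exception {
            File f = new File(args.length > 0 ? args[0] : "input.xml");
            for (int i = 0; i < 500; i++) {
                new FileInputStream(f); // leaked: never closed, no reference kept
            }
            System.out.println("500 streams open; check lsof now");
            Thread.sleep(15000);
            System.gc();               // request collection...
            System.runFinalization();  // ...and run pending finalizers
            System.out.println("GC requested; the handle count should drop");
            Thread.sleep(15000);
        }
    }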
Inference.
1. It looks like the file handles are not being released (at the OS level)
until garbage collection takes place.
2. If this is true, is there any way of ensuring earlier release? (See the
sketch after this list.)
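On point 2: in general Java terms, earlier release means an explicit close()
rather than waiting for the finalizer. The sketch below contrasts the two
lifetimes; it is a generic illustration of the pattern, not Cocoon's actual
source:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReleaseTiming {
        // Handle held until GC: nothing closes the stream, so the OS
        // descriptor is only returned when finalize() eventually runs.
        static void leaky(File f) throws IOException {
            InputStream in = new FileInputStream(f);
            in.read();
        }

        // Handle released immediately: close() in a finally block returns
        // the descriptor as soon as the work is done.
        static void prompt(File f) throws IOException {
            InputStream in = new FileInputStream(f);
            try {
                in.read();
            } finally {
                in.close();
            }
        }
    }

If the streams behind the pipeline are being left to the first pattern, that
would account for all three observations above.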
Things we don't want to do (partly because we don't have to do them for the
other, non-Cocoon applications in production):
1. Vastly increase the file handle limits on the machine.
2. Reduce the size of the JVM heap to ensure there can never be too many File
objects in existence (currently ~1.5 GB).
3. Perform aggressive garbage collection.
Is this an issue for Cocoon? Or is it a wider problem with Java, GC, and OS
resources on heavily loaded systems?