Details
- Type: Wish
- Status: Closed
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 1.0.10, 1.1.7, 2.0.0-M3
- Fix Version/s: None
- Component/s: None
- Environment: Cross-platform
Description
Currently, there is a large build-up of memory because CompressionFilter (or, more precisely, the jzlib library underneath it) accumulates internal data structures over time to compress efficiently.
The problem arises when using thousands of connections: CompressionFilter keeps compression state per IoSession, preventing several hundred megabytes from being GCed.
It would help a lot if the maximum amount of memory cached by ZStream were configurable. I would rather have a small memory footprint and slightly worse compression.
Example: I have 6000 connections to a proxy, with CompressionFilter between the proxy and its clients. Profiling with JProfiler shows that 2 GB of the proxy's heap is held solely by CompressionFilter (zlib) objects, and these objects are not GCed until the connections close.
For now I have created a workaround that compresses each message inside the encoder using standard java.util.zip compression. But this is bad design, since compression is really better performed in a filter, outside the codec.
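A minimal sketch of what such a workaround could look like, assuming plain java.util.zip (not MINA or jzlib APIs): each message is compressed with a short-lived Deflater whose native zlib state is released immediately via end(), so no compression state survives per connection. The class and method names here are hypothetical, not from the actual encoder.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical per-message compressor: trades compression ratio
// (no shared dictionary across messages) for zero retained state.
public class PerMessageCompressor {

    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[512];
            while (!deflater.finished()) {
                int n = deflater.deflate(buf);
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            deflater.end(); // free native zlib memory right away
        }
    }

    public static byte[] decompress(byte[] input) {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(input);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[512];
            while (!inflater.finished()) {
                int n = inflater.inflate(buf);
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } catch (DataFormatException e) {
            throw new RuntimeException("corrupt compressed message", e);
        } finally {
            inflater.end(); // likewise, no state outlives the call
        }
    }
}
```

The cost is exactly the trade-off described above: without a dictionary carried across messages, the ratio is worse than a streaming ZStream, but heap and native memory stay flat regardless of connection count.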