Affects Version/s: 1.1.0, 1.1.1
Fix Version/s: None
We are testing secure writes to Kafka over SSL. At small scale, SSL writes to Kafka worked fine. However, when we enabled SSL writes at larger scale (>40k clients writing concurrently), the Kafka brokers soon hit an OutOfMemoryError with a 4 GB heap. We tried increasing the heap size to 10 GB, but encountered the same issue.
We took a few heap dumps and found that most of the heap memory is referenced through org.apache.kafka.common.network.Selector objects. There are two channel map fields in Selector, and it appears that entries are somehow not removed from these maps in a timely manner.
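To illustrate the suspected pattern, here is a minimal sketch (class and method names are hypothetical, not Kafka's actual Selector code) of how a selector-style class that tracks channels in maps can retain memory when only one close path removes entries:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the leak pattern: channels are tracked in two
// maps, but only the graceful close path removes them from both.
public class SelectorLeakSketch {
    static class Channel {
        final byte[] buffers = new byte[64 * 1024]; // per-channel buffers hold real heap
    }

    private final Map<String, Channel> channels = new HashMap<>();
    private final Map<String, Channel> closingChannels = new HashMap<>();

    void register(String id) {
        channels.put(id, new Channel());
    }

    // Graceful close: the entry is removed from both maps.
    void close(String id) {
        channels.remove(id);
        closingChannels.remove(id);
    }

    // Buggy path: an abrupt disconnect (e.g. clients reconnecting after a
    // leadership change) parks the channel in closingChannels, but nothing
    // ever drains that map, so the Channel stays strongly reachable.
    void disconnect(String id) {
        Channel ch = channels.remove(id);
        if (ch != null) {
            closingChannels.put(id, ch); // never removed -> leak
        }
    }

    int retained() {
        return channels.size() + closingChannels.size();
    }

    public static void main(String[] args) {
        SelectorLeakSketch selector = new SelectorLeakSketch();
        for (int i = 0; i < 1000; i++) {
            String id = "client-" + i;
            selector.register(id);
            selector.disconnect(id); // abrupt disconnects, as during broker restarts
        }
        // All 1000 channels (and their buffers) remain reachable from the maps.
        System.out.println(selector.retained()); // prints 1000
    }
}
```

With tens of thousands of concurrent SSL clients, each leaked channel also pins its SSL buffers, which would match the heap-dump picture above.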
One observation is that the memory leak appears related to Kafka partition leadership changes: if a broker restart or similar event in the cluster causes partition leadership to change, the brokers hit the OOM issue faster.
Please see the attached images and the following link for a sample GC analysis.
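For reference, heap dumps like the ones attached can be captured from a running broker with jmap (the broker pid is a placeholder):

```shell
# Capture a binary heap dump of the broker JVM (replace <broker-pid>).
# 'live' triggers a full GC first so the dump contains only reachable objects.
jmap -dump:live,format=b,file=kafka-broker.hprof <broker-pid>

# Quick class histogram to spot dominant types without opening a full dump.
jmap -histo:live <broker-pid> | head -n 20
```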
The command line for running Kafka:
We use Java 1.8.0_102 and have applied a TLS patch that reduces the X509Factory.certCache map size from 750 to 20.