Details
Type: Bug
Status: Resolved
Priority: Minor
Resolution: Cannot Reproduce
Affects Version/s: 1.1.0
Fix Version/s: None
Component/s: None
Description
Hi. I am observing unbounded memory growth in my kafka-streams application. It gets killed by the OS when it reaches the memory limit (10 GB).
It runs two unrelated pipelines (reading from 4 source topics with 100 partitions each, aggregating the data, and writing to two destination topics).
My environment:
- Kubernetes cluster
- 4 app instances
- 10GB memory limit per pod (instance)
- JRE 8
JVM / Streams app:
- -Xms2g
- -Xmx4g
- num.stream.threads = 4
- commit.interval.ms = 1000
- linger.ms = 1000
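For context, a minimal sketch of how the Streams-level settings above are applied (the application id and bootstrap servers below are placeholders, not my real values):
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProps {
    // Builds the Streams configuration with the settings listed above.
    static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");    // placeholder
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
        // linger.ms is a producer config, forwarded to the embedded producer.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.LINGER_MS_CONFIG), 1000);
        return props;
    }
}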
After the app has been running for about 24 hours it reaches the 10 GB memory limit. Heap and GC look fine, and average non-heap memory usage is 120 MB. I've read this might be related to RocksDB, which Kafka Streams uses underneath, so I tried to tune it following the Confluent documentation, unfortunately with no luck.
RocksDB config #1:
tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
tableConfig.setBlockSize(16 * 1024L);
tableConfig.setCacheIndexAndFilterBlocks(true);
options.setTableFormatConfig(tableConfig);
options.setMaxWriteBufferNumber(2);
RocksDB config #2:
tableConfig.setBlockCacheSize(1024 * 1024L);
tableConfig.setBlockSize(16 * 1024L);
tableConfig.setCacheIndexAndFilterBlocks(true);
options.setTableFormatConfig(tableConfig);
options.setMaxWriteBufferNumber(2);
options.setWriteBufferSize(8 * 1024L);
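For completeness, settings like the two variants above are typically applied through a custom RocksDBConfigSetter registered under rocksdb.config.setter; here is a minimal sketch using the config #1 values (the class name is arbitrary):
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        // Values from "RocksDB config #1" above.
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
        tableConfig.setBlockSize(16 * 1024L);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setTableFormatConfig(tableConfig);
        options.setMaxWriteBufferNumber(2);
    }
}

// Registered in the Streams configuration:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);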
This behavior has only been observed with our production traffic, where the per-topic input message rate is about 10 msg/sec and pretty much constant (no peaks). I am attaching cluster resource usage from the last 24 hours.
Any help or advice would be much appreciated.