  Kafka / KAFKA-12559

Add a top-level Streams config for bounding off-heap memory



    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: streams


      At the moment we provide an example of how to bound the memory usage of RocksDB in the Memory Management section of the docs. This requires implementing a custom RocksDBConfigSetter class and setting a number of RocksDB options that touch on relatively advanced concepts and configurations. It seems a fair number of users either fail to find this or consider it to be only for more advanced use cases/users. But RocksDB can eat up a lot of off-heap memory, and it's not uncommon for users to run into a RocksDBException: Cannot allocate memory.
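For context, the docs example follows roughly this shape: a custom RocksDBConfigSetter that shares a single bounded LRUCache and WriteBufferManager across all RocksDB instances in the JVM. This is a sketch, not the exact docs listing; the budget constants (TOTAL_OFF_HEAP_MEMORY, TOTAL_MEMTABLE_MEMORY, INDEX_FILTER_BLOCK_RATIO) are placeholders the user would have to pick themselves, which is exactly the burden this ticket wants to remove.

```java
import java.util.Map;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;
import org.apache.kafka.streams.state.RocksDBConfigSetter;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    // Placeholder budgets -- the user must tune these by hand today
    private static final long TOTAL_OFF_HEAP_MEMORY = 2L * 1024 * 1024 * 1024;
    private static final long TOTAL_MEMTABLE_MEMORY = 1L * 1024 * 1024 * 1024;
    private static final double INDEX_FILTER_BLOCK_RATIO = 0.1;

    // Static, so the same cache and write buffer manager are shared by
    // every RocksDB store in the application, making the bound global
    private static final Cache cache =
        new LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, INDEX_FILTER_BLOCK_RATIO);
    private static final WriteBufferManager writeBufferManager =
        new WriteBufferManager(TOTAL_MEMTABLE_MEMORY, cache);

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        BlockBasedTableConfig tableConfig =
            (BlockBasedTableConfig) options.tableFormatConfig();
        // Charge memtable (write buffer) memory against the shared cache
        options.setWriteBufferManager(writeBufferManager);
        // Count index/filter blocks against the cache budget as well
        tableConfig.setCacheIndexAndFilterBlocks(true);
        tableConfig.setBlockCache(cache);
        tableConfig.setPinTopLevelIndexAndFilter(true);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // The cache and write buffer manager are shared across all stores,
        // so they must not be closed per store here
    }
}
```

The setter is then wired in via the existing rocksdb.config.setter Streams config.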

      It would probably be a much better user experience if we implemented this memory bound out-of-the-box and just gave users a top-level StreamsConfig to tune the off-heap memory given to RocksDB, as we have for on-heap cache memory with cache.max.bytes.buffering. More advanced users can continue to fine-tune their memory bounding and apply other configs with a custom config setter, while new or more casual users can cap the off-heap memory without getting their hands dirty with RocksDB.

      I would propose to add the following top-level config:

      rocksdb.max.bytes.off.heap: medium priority, default to -1 (unbounded), valid values are [0, inf]

      I'd also want to consider adding a second, lower-priority top-level config to give users a knob for adjusting how much of that total off-heap memory goes to the block cache + index/filter blocks, and how much is afforded to the write buffers. I'm struggling to come up with a good name for this config, but it would be something like

      rocksdb.memtable.to.block.cache.off.heap.memory.ratio: low priority, default to 0.5, valid values are [0, 1]
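To make the proposed ratio's semantics concrete, here is a small sketch (hypothetical, since neither config exists yet) of how a total off-heap budget would be split between the write buffers and the block cache under the proposed default of 0.5:

```java
public class OffHeapConfigSketch {

    // Hypothetical semantics for the proposed configs: the ratio decides
    // what fraction of rocksdb.max.bytes.off.heap goes to memtables,
    // with the remainder going to the block cache + index/filter blocks
    static long[] split(long totalOffHeapBytes, double memtableRatio) {
        long memtableBytes = (long) (totalOffHeapBytes * memtableRatio);
        long blockCacheBytes = totalOffHeapBytes - memtableBytes;
        return new long[] { memtableBytes, blockCacheBytes };
    }

    public static void main(String[] args) {
        long total = 2L * 1024 * 1024 * 1024;  // rocksdb.max.bytes.off.heap = 2 GiB
        long[] parts = split(total, 0.5);      // proposed default ratio
        System.out.println("write buffers (memtables): " + parts[0]);
        System.out.println("block cache + index/filter: " + parts[1]);
    }
}
```

With a 2 GiB budget and the default ratio, each side would get 1 GiB; a ratio of 0 or 1 would dedicate the whole budget to one side.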




            Assignee: Aditya Upadhyaya (adityau)
            Reporter: A. Sophie Blee-Goldman (ableegoldman)
            Votes: 0
            Watchers: 9