KAFKA-13152

Replace "buffered.records.per.partition" & "cache.max.bytes.buffering" with "{statestore.cache}/{input.buffer}.max.bytes"


Details

    • Type: Improvement
    • Status: Reopened
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 4.0.0
    • Component/s: streams

    Description

      The current config "buffered.records.per.partition" controls the maximum number of records to bookkeep per partition; once that limit is exceeded, we pause fetching from the partition. However, this config has two issues:

      • It's a per-partition config, so the total memory consumed is dependent on the dynamic number of partitions assigned.
      • Record size could vary from case to case.

      Hence it is hard to bound the memory usage of this buffering. We should consider deprecating that config in favor of a global one, e.g. "input.buffer.max.bytes", which controls how many bytes in total may be buffered. This is doable since we buffer the raw records as <byte[], byte[]>. See the configuration sketch below.
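      A minimal sketch of how such byte-based bounds might be configured in a Streams application, assuming the key names proposed here ("input.buffer.max.bytes" and, per the title, "statestore.cache.max.bytes"); these are the names proposed in this ticket, not necessarily the final StreamsConfig constants:

          import java.util.Properties;
          import org.apache.kafka.streams.StreamsConfig;

          public class BufferConfigSketch {
              public static void main(String[] args) {
                  Properties props = new Properties();
                  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "buffer-config-demo");
                  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

                  // Existing per-partition record-count cap: total memory still depends on
                  // how many partitions are assigned and on the size of each record.
                  props.put(StreamsConfig.BUFFERED_RECORDS_PER_PARTITION_CONFIG, 1000);

                  // Proposed replacements (key names taken from this ticket; assumed here,
                  // they may differ in the final implementation): single global byte-based
                  // bounds for the input buffer and the state store cache.
                  props.put("input.buffer.max.bytes", 512L * 1024 * 1024);
                  props.put("statestore.cache.max.bytes", 128L * 1024 * 1024);

                  // ... build the topology and start KafkaStreams with these props ...
              }
          }

      A single global byte-based bound is enforceable because the raw records are buffered as <byte[], byte[]>, so their sizes can be summed directly regardless of how many partitions happen to be assigned.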

Attachments

Issue Links

Activity

People

    Assignee: Sagar Rao (sagarrao)
    Reporter: Guozhang Wang (guozhang)
    Votes: 0
    Watchers: 9

Dates

    Created:
    Updated: