Kafka / KAFKA-10711

A low value of commit.interval.ms leads to unnecessary offset commits


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.6.0
    • Fix Version/s: None
    • Component/s: consumer, offset manager
    • Labels: None

    Description

      We want to avoid double delivery of the same records in Kafka, so we set

      commit.interval.ms=0 and max.poll.records=1

      With the default commit.interval.ms of 5 seconds, if the app crashes at runtime, then after a restart it will re-receive all records that were left uncommitted during those 5 seconds. But when committing after every record, the app will re-receive at most one duplicated record.

      We expect the consumer to poll(5 sec) a single record from the topic, and then, on the next poll(5 sec), to commit the offset of the record returned by the previous poll.


      However, the consumer commits offsets without any delay, even if those offsets were already committed before. Such a high volume of commit requests overloads the Kafka brokers.

      Could you please improve the consumer behavior so that offsets which were already committed are not committed again, i.e. commit only when necessary?
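      For reference, a minimal sketch of the configuration described above. It assumes a Kafka Streams application, since commit.interval.ms is a Streams configuration (max.poll.records is a consumer configuration that Streams passes through to its internal consumers); the application id and broker address are hypothetical, and the keys are plain strings so the sketch compiles without the Kafka jars on the classpath.

```java
import java.util.Properties;

public class SingleRecordCommitConfig {
    // Builds the configuration the report describes: commit after every
    // processed record, and fetch at most one record per poll.
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("application.id", "single-record-commit-app"); // hypothetical app id
        props.put("bootstrap.servers", "localhost:9092");        // hypothetical broker
        // Commit after every processed record instead of the 5000 ms default.
        props.put("commit.interval.ms", "0");
        // Return at most one record per poll, so a crash redelivers at most one record.
        props.put("max.poll.records", "1");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConfig();
        System.out.println("commit.interval.ms=" + props.getProperty("commit.interval.ms"));
        System.out.println("max.poll.records=" + props.getProperty("max.poll.records"));
    }
}
```

      Note the trade-off the report is about: with this configuration the duplicate window shrinks to a single record, but every record triggers a commit request to the brokers.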


          People

            Assignee: Unassigned
            Reporter: Ruslan Gryn (ruslan.hryn)
            Votes: 0
            Watchers: 2
