  Kafka / KAFKA-13831

Kafka retention can use old value of retention.ms


Details

    ‱ Type: Bug
    ‱ Status: Open
    ‱ Priority: Major
    ‱ Resolution: Unresolved
    ‱ Affects Version/s: 2.8.0
    ‱ Fix Version/s: None
    ‱ Component/s: core
    ‱ Labels: None

    Description

      Hi,
      I think I have found a bug in Kafka retention.
      I'm using Confluent Platform 6.2.2 (Kafka 2.8.0).
      I changed retention.ms for a topic twice:
      1. From 432000000 ms to 180000 ms (to clean the topic)
      2. Back to 432000000 ms.

      After the second change, the retention thread is still using the 180000 ms value.
      Only a broker restart fixes this issue.
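      For context, the report does not say which tool applied the config changes; a minimal sketch of the two changes using the standard kafka-configs CLI, assuming a broker reachable at localhost:9092 and the topic named below:

      # Step 1 (assumed invocation): shrink retention to 180000 ms to purge old data
      kafka-configs --bootstrap-server localhost:9092 --entity-type topics \
        --entity-name pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw \
        --alter --add-config retention.ms=180000

      # Step 2 (assumed invocation): restore the original 432000000 ms (5 days)
      kafka-configs --bootstrap-server localhost:9092 --entity-type topics \
        --entity-name pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw \
        --alter --add-config retention.ms=432000000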

      Logs:

      server.log.2022-04-15-03:[2022-04-15 03:29:08,445] INFO [Log partition=pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw-0, dir=/data/kafka] Deleting segment LogSegment(baseOffset=1029819055, size=22996644, lastModifiedTime=1650007299179, largestRecordTimestamp=Some(1650007299178)) due to retention time 180000ms breach based on the largest record timestamp in the segment (kafka.log.Log)
      

      Topic description:

      kafka-topics --bootstrap-server localhost:9092 --describe --topic pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw
      Topic: pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw TopicId: svLdGbOaRXmdkHGsdlaPUQ PartitionCount: 1 ReplicationFactor: 3 Configs: min.insync.replicas=2,segment.bytes=1073741824,retention.ms=432000000,segment.ms=86400000
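      The describe output above shows the restored override; the dynamic topic config can also be read back directly with kafka-configs (a sketch, assuming the same local broker), which is useful for confirming what value the brokers currently hold:

      kafka-configs --bootstrap-server localhost:9092 --entity-type topics \
        --entity-name pm.hwe.lte.lcell.inc.intrarat.ho.x2.raw --describe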
      

      EDIT: the scale of this problem is around 0.1% of the topics where I made the change (a few topics out of a few thousand).


          People

            Assignee: Unassigned
            Reporter: Maciej BryƄski
            Votes: 0
            Watchers: 2
