Kafka / KAFKA-8522

Tombstones can survive forever


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.1.0
    • Component/s: log cleaner
    • Labels: None

    Description

      This is a bit of a grey zone as to whether it's a "bug", but it is certainly unintended behaviour.

       

      Under specific conditions tombstones effectively survive forever:

      • Low throughput;
      • min.cleanable.dirty.ratio at or near 0; and
      • other parameters left at their defaults.
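
      As a concrete illustration, topic-level overrides along these lines would reproduce the scenario; the specific values below are illustrative assumptions, not taken from the report:

```properties
# Hypothetical topic-level overrides matching the conditions above;
# everything not listed stays at its broker default.
cleanup.policy=compact
min.cleanable.dirty.ratio=0.01   # at or near 0: the cleaner fires on almost any dirt
# delete.retention.ms is left at its default (86400000 ms = 24 h);
# tombstones should normally be dropped once clean for that long.
```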

      What happens is that all of the live data continuously gets cycled into the oldest segment. Old records are compacted away, but the new records keep updating the timestamp of the oldest segment, resetting the delete.retention.ms countdown that would otherwise remove the tombstones.

      So tombstones build up in the oldest segment forever.
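
      The feedback loop described above can be sketched as a toy simulation. This is hypothetical model code, not Kafka internals; the pass interval and record counts are assumptions chosen to illustrate the effect:

```python
# Toy model: each compaction pass folds new records into the oldest
# segment, which bumps its modification time and restarts the
# delete.retention.ms countdown before tombstones can be dropped.
DELETE_RETENTION_MS = 86_400_000  # default delete.retention.ms (24 h)

class Segment:
    def __init__(self, created_ms: int):
        self.last_modified_ms = created_ms
        self.tombstones = 0

def compact(oldest: Segment, now_ms: int, new_tombstones: int) -> bool:
    """One compaction pass over the oldest segment."""
    # Tombstones are only dropped if the segment has sat untouched
    # for longer than delete.retention.ms.
    expired = now_ms - oldest.last_modified_ms > DELETE_RETENTION_MS
    if expired:
        oldest.tombstones = 0
    oldest.tombstones += new_tombstones
    oldest.last_modified_ms = now_ms   # <- the countdown resets every pass
    return expired

seg = Segment(created_ms=0)
now = 0
for _ in range(100):          # a low, steady trickle of writes
    now += 3_600_000          # one pass per hour, well under the 24 h retention
    compact(seg, now, new_tombstones=1)

print(seg.tombstones)         # -> 100: tombstones accumulate, none ever expire
```

      Because each pass arrives sooner than delete.retention.ms after the previous one, the expiry check never fires and the tombstone count only grows.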

       

      While you could "fix" this by reducing the segment size, that can be undesirable: a sudden increase in throughput could then cause a dangerous number of segments to be created.

      Attachments

        Activity


          People

            Assignee: Richard Yu (Yohan123)
            Reporter: Evelyn Bayes (EeveeB)
            Votes: 4
            Watchers: 16

            Dates

              Created:
              Updated:
              Resolved:
