KAFKA-10471

TimeIndex handling may cause data loss in certain back to back failure


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0
    • Component/s: core, log
    • Labels: None

    Description

      1. The active segment of log A goes through a clean shutdown: the time index is trimmed to the last filled entry, and the clean shutdown marker is set.
      2. The broker restarts and loads its logs. Because of the clean shutdown marker, no recovery runs; log A comes back with the previous active segment as current, and its TimeIndex is resized back to its maximum size.
      3. Before all logs finish loading, the broker suffers a hard shutdown, leaving the clean shutdown marker in place.
      4. The broker restarts again. Log A skips recovery due to the presence of the clean shutdown marker, but the TimeIndex code assumes the resized file from the previous instance is completely full (it assumes a file is either newly created or entirely filled with valid entries).
      5. The first append to the active segment triggers a roll, and the TimeIndex is rolled with the timestamp of its "last valid entry", which is 0.
      6. The segment's largest timestamp therefore reads as 0, which can cause premature deletion of data under time-based retention.
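
      The core of the failure in steps 2 and 4 can be illustrated with a minimal sketch (this is not Kafka's actual `TimeIndex` implementation; the class name, fields, and entry layout here are simplified assumptions): when an index file that was resized to full capacity but never written is reloaded as "full", the zero-filled tail is read back as a valid last entry with timestamp 0.

      ```java
      import java.nio.ByteBuffer;

      // Simplified model of a time index entry file: each entry is an
      // 8-byte timestamp followed by a 4-byte relative offset.
      public class TimeIndexSketch {
          static final int ENTRY_SIZE = 12;

          final ByteBuffer mmap;

          TimeIndexSketch(int capacityEntries, boolean newlyCreated) {
              // Zero-filled buffer stands in for the resized, never-written
              // index file left behind in step 2.
              this.mmap = ByteBuffer.allocate(capacityEntries * ENTRY_SIZE);
              // On reload of a pre-existing file (step 4), the loader assumes
              // the whole file holds valid entries, so it positions the
              // buffer at the limit instead of scanning for real entries.
              if (!newlyCreated) mmap.position(mmap.limit());
          }

          int entries() { return mmap.position() / ENTRY_SIZE; }

          // Timestamp of the last entry; for the zero-filled file this reads
          // 0 even though the index was never actually written.
          long lastTimestamp() {
              int n = entries();
              return n == 0 ? 0L : mmap.getLong((n - 1) * ENTRY_SIZE);
          }

          public static void main(String[] args) {
              TimeIndexSketch reloaded = new TimeIndexSketch(1024, false);
              System.out.println(reloaded.entries());       // assumed full
              System.out.println(reloaded.lastTimestamp()); // reads as 0
          }
      }
      ```

      Under this assumption, the segment roll in step 5 picks up that bogus timestamp 0 as the segment's largest timestamp, which is what exposes the segment to premature time-based retention in step 6.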


            People

              Raman Verma (ramanverma)
              Rohit Shekhar (rshekhar)
              Jun Rao
              Votes: 0
              Watchers: 6
