
KAFKA-6264: Log cleaner thread may die on legacy segment containing messages whose offsets are too large


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Versions: 0.10.2.1, 0.11.0.2, 1.0.0
    • Fix Version: 2.0.0
    • Component: core
    • Labels: None

    Description

      We encountered a problem where some legacy log segments contain messages whose offsets are larger than SegmentBaseOffset + Int.MaxValue.

      Prior to 0.10.2.0, we did not assert the offsets of messages when appending them to log segments. Due to KAFKA-5413, the log cleaner could append messages whose offsets are greater than base_offset + Int.MaxValue to a segment during log compaction.

      After the brokers are upgraded, such log segments can no longer be compacted: compaction fails immediately on the offset range assertion we added to LogSegment.
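
      For context, a log segment encodes each message offset as a 4-byte delta from the segment's base offset, which is why base_offset + Int.MaxValue bounds the offsets a single segment can represent. Below is a minimal Scala sketch of the kind of range check involved; the class and method names are illustrative, not the actual LogSegment code:

      ```scala
      // Illustrative sketch, not the actual kafka.log.LogSegment code: a segment
      // stores each offset as a 4-byte delta from its base offset.
      class SegmentSketch(val baseOffset: Long) {

        // True iff `offset` can be encoded as a relative offset in this segment.
        def canConvertToRelativeOffset(offset: Long): Boolean = {
          val delta = offset - baseOffset
          delta >= 0 && delta <= Int.MaxValue
        }

        // Post-0.10.2 brokers enforce the offset range on append.
        def append(offset: Long): Unit = {
          require(canConvertToRelativeOffset(offset),
            s"Offset $offset is out of range for segment with base offset $baseOffset")
          // ... write the message with relative offset (offset - baseOffset).toInt
        }
      }
      ```

      A legacy segment that already holds an offset past that bound trips this assertion as soon as the cleaner touches it, which is what kills the cleaner thread.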

      We have seen this issue on the __consumer_offsets topic, so it could be a general problem. There is no easy way for users to recover from this case.

      One solution is for the log cleaner to split such a log segment once it sees a message with a problematic offset, appending that message and the ones after it to a separate log segment with a larger base_offset, as sketched below.
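
      A hedged sketch of that idea, reusing the illustrative SegmentSketch class above (this shows the approach, not the actual patch): while copying messages in offset order, roll to a new segment based at the first unrepresentable offset, so that its delta becomes 0.

      ```scala
      import scala.collection.mutable.ArrayBuffer

      // Hypothetical split routine: rewrite one overflowed segment (modeled as its
      // base offset plus a sorted sequence of message offsets) into as many
      // segments as the 4-byte relative-offset encoding requires.
      def splitOverflowedSegment(baseOffset: Long, offsets: Seq[Long]): Seq[SegmentSketch] = {
        val segments = ArrayBuffer(new SegmentSketch(baseOffset))
        for (offset <- offsets) {
          if (!segments.last.canConvertToRelativeOffset(offset))
            // Roll a new segment based at the problematic offset, so the message's
            // relative offset becomes 0 and the append succeeds.
            segments += new SegmentSketch(offset)
          segments.last.append(offset)
        }
        segments.toSeq
      }
      ```

      For example, splitOverflowedSegment(0L, Seq(0L, Int.MaxValue.toLong + 1)) yields two segments, based at offsets 0 and Int.MaxValue + 1.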

      Given the impact of this issue, we may want to consider backporting the fix to earlier affected versions.


            People

              Assignee: Dhruvil Shah (dhruvilshah)
              Reporter: Jiangjie Qin (becket_qin)