  Kafka / KAFKA-5321

MemoryRecords.filterTo can return corrupt data if output buffer is not large enough


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.11.0.0
    • Component/s: log
    • Labels: None

    Description

      Due to KAFKA-5316, it is possible for a record set to grow during cleaning and overflow the output buffer allocated for writing. When we reach a record set that is doomed to overflow the buffer, there are two possibilities:

      1. No records were removed and the original entry is directly appended to the log. This results in the overflow reported in KAFKA-5316.
      2. Records were removed and a new record set is built.

      Here we are concerned with the latter case. The problem is that the builder code automatically allocates a new buffer when it reaches the end of the existing buffer and does not reset the position in the original buffer. Since MemoryRecords.filterTo continues using the old buffer, this can lead to data corruption after cleaning (the data left in the overflowed buffer is garbage).
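      For illustration only, here is a minimal, self-contained sketch (hypothetical class and method names, not Kafka's actual MemoryRecordsBuilder code) of the hazard: a builder that silently switches to a larger buffer once the caller-supplied buffer fills up, leaving the original buffer holding an incomplete record set that a caller like filterTo would then hand back.

      import java.nio.ByteBuffer;
      import java.nio.charset.StandardCharsets;

      // Hypothetical sketch, not Kafka code: append() grows into a new buffer when the
      // caller-supplied one runs out of room, so the caller's buffer silently diverges.
      public class GrowingBuilderSketch {
          static ByteBuffer append(ByteBuffer buffer, byte[] record) {
              if (buffer.remaining() < record.length) {
                  // Transparent growth: allocate a larger buffer and copy what was written so far.
                  ByteBuffer grown = ByteBuffer.allocate(
                          Math.max(buffer.capacity() * 2, buffer.position() + record.length));
                  buffer.flip();
                  grown.put(buffer);
                  buffer = grown;
              }
              buffer.put(record);
              return buffer;
          }

          public static void main(String[] args) {
              ByteBuffer original = ByteBuffer.allocate(16);
              ByteBuffer current = original;
              for (String s : new String[] {"record-1", "record-2", "record-3"})
                  current = append(current, s.getBytes(StandardCharsets.UTF_8));
              // The complete output now lives only in 'current'; a caller that keeps using
              // 'original' (as filterTo does with the old buffer) returns incomplete/garbage data.
              System.out.println("same buffer? " + (original == current)); // false
          }
      }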

      Note that this issue could get fixed as part of a general solution to KAFKA-5316, but if that seems too risky, we might fix this separately. A simple solution is to make both paths consistent and ensure that we raise an exception.
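      As a minimal sketch of the exception option (hypothetical names, not the actual fix), the caller could verify after building that the builder is still backed by the buffer it was given and fail loudly otherwise:

      import java.nio.ByteBuffer;

      // Hypothetical guard, not Kafka's API: refuse to proceed if the builder silently
      // grew into a different buffer, instead of returning the old buffer's garbage bytes.
      public class OverflowGuardSketch {
          static void checkNoSilentGrowth(ByteBuffer callerBuffer, ByteBuffer builderBuffer) {
              if (callerBuffer != builderBuffer)
                  throw new IllegalStateException("Filtered records overflowed the output buffer ("
                          + callerBuffer.capacity() + " bytes); aborting instead of writing corrupt data");
          }

          public static void main(String[] args) {
              ByteBuffer callerBuffer = ByteBuffer.allocate(16);
              ByteBuffer builderBuffer = ByteBuffer.allocate(32); // simulate a builder that had to grow
              checkNoSilentGrowth(callerBuffer, builderBuffer);   // throws IllegalStateException
          }
      }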


          People

            Assignee: hachikuji Jason Gustafson
            Reporter: hachikuji Jason Gustafson
            Votes: 0
            Watchers: 2
