Kafka / KAFKA-8428

Cleanup LogValidator#validateMessagesAndAssignOffsetsCompressed to assume single record batch only


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.3.0
    • Component/s: None
    • Labels: None

    Description

      Today, the client -> server record batching protocol works like this:

      1. With magic v2, we always require a single batch within a compressed set, and LogValidator#validateMessagesAndAssignOffsetsCompressed already assumes this.

      2. With magic v1, our code effectively also assumes one record batch: whenever inPlaceAssignment is true we assume a single batch, and with magic v1 inPlaceAssignment can still be true.

      3. With magic v0, our code does handle the case of multiple record batches, since with v0 inPlaceAssignment is always false (see the simplified sketch after this list).
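
      To make the three cases concrete, here is a minimal sketch of the decision as I read it; this is a deliberate simplification rather than the actual LogValidator code (which also considers message format up-conversion and other conditions), and the class and method names are illustrative only:

          import org.apache.kafka.common.record.CompressionType;
          import org.apache.kafka.common.record.RecordBatch;

          public class InPlaceAssignmentSketch {
              // Simplified paraphrase: the broker keeps the producer's compressed batch
              // as-is (and only assigns offsets in place) when no re-compression is
              // needed. With v0 this never happens, so the multi-batch path is
              // exercised; with v1/v2 it can happen, and that path assumes one batch.
              static boolean inPlaceAssignment(byte magic, CompressionType sourceCodec,
                                               CompressionType targetCodec) {
                  if (magic == RecordBatch.MAGIC_VALUE_V0)
                      return false;                  // case 3: v0 never assigns offsets in place
                  return sourceCodec == targetCodec; // cases 1 and 2: v1/v2 may, assuming one batch
              }
          }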

      This makes the logic of LogValidator#validateMessagesAndAssignOffsetsCompressed quite twisted and complicated.

      Since all standard client implementations we know of so far wrap a single batch when compressing (of course, we cannot guarantee this is the case for all clients in the wild, but the chance of multiple batches within a compressed set should be very rare), I think it is better to make the single-batch assumption a universal requirement for all magic versions.
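
      As a rough illustration of what such a universal requirement could look like on the validation path, here is a minimal sketch; the class name, helper name, and exception choice are placeholders, not the actual patch:

          import org.apache.kafka.common.record.MemoryRecords;
          import org.apache.kafka.common.record.MutableRecordBatch;

          public class CompressedSetValidation {
              // Illustrative check only: with the proposed requirement, a compressed
              // message set must contain exactly one record batch regardless of the
              // magic version, so the validator can reject anything else up front.
              static MutableRecordBatch requireSingleBatch(MemoryRecords records) {
                  MutableRecordBatch first = null;
                  for (MutableRecordBatch batch : records.batches()) {
                      if (first != null)
                          throw new IllegalArgumentException(
                              "Compressed message set should contain only one record batch");
                      first = batch;
                  }
                  if (first == null)
                      throw new IllegalArgumentException("Compressed message set contains no record batches");
                  return first;
              }
          }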


People

    Assignee: Guozhang Wang (guozhang)
    Reporter: Guozhang Wang (guozhang)
    Votes: 0
    Watchers: 2
