KAFKA-546: Fix commit() in zk consumer for compressed messages

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.8.0
    • Component/s: None
    • Labels: None

      Description

      In 0.7.x and earlier versions offsets were assigned by the byte location in the file. Because it wasn't possible to directly decompress from the middle of a compressed block, messages inside a compressed message set effectively had no offset. As a result the offset given to the consumer was always the offset of the wrapper message set.

      In 0.8 after the logical offsets patch messages in a compressed set do have offsets. However the server still needs to fetch from the beginning of the compressed messageset (otherwise it can't be decompressed). As a result a commit() which occurs in the middle of a message set will still result in some duplicates.

      This can be fixed in the ConsumerIterator by discarding messages smaller than the fetch offset rather than giving them to the consumer. This will make commit work correctly in the presence of compressed messages (finally).
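      The discard logic described above can be sketched as follows. This is an illustrative simplification, not the committed patch: the element type `MessageAndOffset` matches Kafka's iterator element, but the method name and exact structure inside ConsumerIterator.next() are hypothetical.

      ```scala
      // Illustrative sketch: before handing a message to the consumer, skip
      // any message whose offset precedes the consumer's committed offset.
      // A fetch must start at the beginning of the compressed wrapper, so
      // after a commit() mid-message-set the first few messages are repeats.
      def firstUnconsumed(localCurrent: Iterator[MessageAndOffset],
                          consumedOffset: Long): MessageAndOffset = {
        var item = localCurrent.next()
        // guard with hasNext so an exhausted chunk cannot throw here
        while (item.offset < consumedOffset && localCurrent.hasNext)
          item = localCurrent.next()
        item
      }
      ```

      Because the skipping happens in the iterator rather than on the broker, the server can keep fetching whole compressed message sets while the client still sees exactly-once delivery of each logical offset after a commit.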

      1. kafka-546-v1.patch
        8 kB
        Swapnil Ghike
      2. kafka-546-v2.patch
        8 kB
        Swapnil Ghike
      3. kafka-546-v3.patch
        10 kB
        Swapnil Ghike
      4. kafka-546-v4.patch
        10 kB
        Swapnil Ghike
      5. kafka-546-v5.patch
        10 kB
        Swapnil Ghike

        Activity

        Swapnil Ghike added a comment - edited

        1. ConsumerIterator skips messages that have already been fetched.

        2. consumer.PartitionTopicInfo.enqueue()

        • Made a change to pass the starting offset of a messageSet instead of the current fetchedOffset to FetchedDataChunk(). The current code expects the fetchedOffset to be the same as the starting offset of the incoming messageSet. But if a messageSet was partially consumed and fetched again, the fetchedOffset that goes into the FetchedDataChunk will be greater than the starting offset in the incoming messageSet. The fix takes care of this situation. The fix also does not disturb consumption under normal sequential fetches, because in this situation the starting offset of incoming messageSet will actually be the same as the fetchedOffset recorded in partitionTopicInfo.

        3. Added a unit test to test de-duplication of messages in ConsumerIterator.

        Jun Rao added a comment -

        Can't seem to apply the patch cleanly to 0.8. Could you rebase?

        $ patch -p0 < ~/Downloads/kafka-546-v1.patch
        patching file core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala
        patching file core/src/main/scala/kafka/message/ByteBufferMessageSet.scala
        Reversed (or previously applied) patch detected! Assume -R? [n] ^C

        Swapnil Ghike added a comment -

        Rebased.

        Jun Rao added a comment -

        Thanks for patch v2. Some comments:

        20. ConsumerIterator.next(): The following code depends on no gaps in offsets. This is true at this moment, but may not be true in the future when we have a different retention policy. A safer way is to keep iterating the messageSet until we get an offset that reaches or passes ctiConsumeOffset.
        for (i <- 0L until (ctiConsumeOffset - cdcFetchOffset)) {
          localCurrent.next()
        }

        21. PartitionTopicInfo: In startOffset(), unfortunately, we can't use the shallow iterator. This is because when messages are compressed, the offset of the top-level message is the offset of the last message (instead of the first one) in the compressed unit. Also, iterating messages here may not be ideal since it forces us to decompress. An alternative is to do the logic in ConsumerIterator.next(). Every time we get a new chunk of messageset, we keep iterating it until the message offset reaches or passes the consume offset. This way, if we are doing shallow iteration, we don't have to decompress messages.

        22. ConsumerIteratorTest:
        22.1 zkConsumerConnector is not used.
        22.2 We probably should set consumerOffset to a value >0 in PartitionTopicInfo.
        22.3 Also, could we add a test that covers compressed messageset?

        Swapnil Ghike added a comment -

        Thanks for the comments. The fixes are as follows:

        20. Changed ConsumerIterator to iterate until it reaches or passes ctiConsumerOffset.

        21. Reverted this change because, as discussed:
        i. On commit of a part of a compressed message, the fetchOffset that will be checkpointed will be the actual fetchOffset, and not the offset of the last message in the compressed message set.
        ii. We need to keep fetchOffset to make sure that ShallowIterator works fine under normal conditions.

        22.1 Removed zkConsumerConnector.
        22.2 The comments in the test case should be helpful in this regard.
        22.3 Changed the test to use a deep iterator over a compressed message set.

        Other random changes:
        1. Imports optimized over changes that were pulled in via rebase.
        2. I had missed removing the calls to toInt() at a couple places in KAFKA-556 for FileMessageSet.sizeInBytes(). Fixing this.

        Jun Rao added a comment -

        Thanks for patch v3. A couple of more comments:
        30. ConsumerIterator.next(): In the following code, to be safe, we need to check if localCurrent hasNext in the while loop.
        // reject the messages that have already been consumed
        while (item.offset < currentTopicInfo.getConsumeOffset) {
          item = localCurrent.next()
        }

        31. ConsumerIteratorTest: Not sure if the test really does what it intends to. To simulate reading from the middle of a compressed messageset, we need to put in a consume offset larger than 0 in PartitionTopicInfo, right?

        Swapnil Ghike added a comment -

        30. Fixed.

        31. The test in v3 patch would've worked too, but changed it for clarity in this patch.

        Swapnil Ghike added a comment -

        A small addition to the unit test to make sure that the iterator does not have any extra elements.

        Jun Rao added a comment -

        Thanks for patch v5. +1. Committed to 0.8.


          People

          • Assignee:
            Swapnil Ghike
            Reporter:
            Jay Kreps
          • Votes:
            0
            Watchers:
            3
