Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Not A Problem
Description
Stream-stream joins use the regular `WindowStore` implementation, but with `retainDuplicates` set to true. To allow duplicates while reusing the same unique-key underlying stores, we wrap each key with an incrementing sequence number before inserting it.
This wrapping occurs at the innermost layer of the store hierarchy, which means duplicate records pass through the changelogging layer first, while their keys are still identical. We therefore send the records to the changelog without distinct keys and may lose the older of the duplicates during log compaction.
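The duplicate-retention trick can be illustrated with a minimal sketch (class and method names here are hypothetical, not Kafka's actual internals): the innermost layer appends a 4-byte sequence number to each key, so "duplicate" entries occupy distinct slots in a unique-key store.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of retainDuplicates: wrap each key with an
// incrementing sequence number so a unique-key store can hold
// multiple values for the "same" key.
public class DuplicateRetainingStore {
    private final Map<ByteBuffer, byte[]> inner = new LinkedHashMap<>();
    private int seqnum = 0;

    public void put(byte[] key, byte[] value) {
        // Wrapped layout: [key bytes][4-byte sequence number]
        ByteBuffer wrapped = ByteBuffer
                .allocate(key.length + Integer.BYTES)
                .put(key)
                .putInt(seqnum++);
        wrapped.flip();
        inner.put(wrapped, value);
    }

    public List<byte[]> fetch(byte[] key) {
        // Return every value whose wrapped key starts with `key`.
        List<byte[]> results = new ArrayList<>();
        for (Map.Entry<ByteBuffer, byte[]> e : inner.entrySet()) {
            byte[] stored = e.getKey().array();
            if (stored.length == key.length + Integer.BYTES
                    && ByteBuffer.wrap(stored, 0, key.length)
                                 .equals(ByteBuffer.wrap(key))) {
                results.add(e.getValue());
            }
        }
        return results;
    }
}
```

Because the sequence number is only appended at this innermost layer, layers above it (including changelogging) still see the original, unwrapped key.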
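A toy simulation of log compaction shows why this loses data: compaction keeps only the latest value per key, so if both duplicates reach the changelog under the identical unwrapped key, the older one is dropped.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of log compaction: for each key, only the most recent
// value survives. Records are (key, value) pairs in append order.
public class CompactionSketch {
    public static Map<String, String> compact(List<String[]> log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] record : log) {
            latest.put(record[0], record[1]); // later write overwrites older one
        }
        return latest;
    }
}
```

Two duplicate records written under the same key compact down to one, which is exactly the data loss this issue describes.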
Attachments
Issue Links
- duplicates
  - KAFKA-5804 ChangeLoggingWindowBytesStore needs to retain duplicates when writing to the log (Resolved)
- is related to
  - KAFKA-9921 Caching is not working properly with WindowStateStore when retaining duplicates (Resolved)