We had a transaction system test fail with the following error:
After investigation, we found that the duplicates resulted from the consumer reading an aborted transaction, which should not be possible with the read_committed isolation level.
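For context, read_committed is a consumer-side setting; a minimal sketch of the relevant configuration (class and helper names here are illustrative, not from the test in question):

```java
import java.util.Properties;

public class ReadCommittedConfig {
    // With isolation.level=read_committed, the consumer must only see data
    // below the last stable offset (LSO); records from aborted transactions
    // are filtered out and never delivered to the application.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("isolation.level", "read_committed");
        props.setProperty("enable.auto.commit", "false");
        return props;
    }
}
```

The failure described below violates exactly this contract: data above the LSO was returned to a read_committed consumer.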
We tracked down the fetch request which returned the aborted data:
After correlating with the contents of the log segment 00000000000000045694.log, we found that this fetch response included data above the returned LSO of 50646. In fact, the high watermark matched the LSO in this case, so the data was above the high watermark as well.
At the same time this request was received, we noted that the high watermark was updated:
The position of the new high watermark matched the end position from the fetch response, so that led us to believe there was a race condition with the updating of this value. In the code, we have the following (abridged) logic for fetching the LSO:
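A minimal model of that logic is sketched below; the class and field names are illustrative, not the actual broker code, but the control flow mirrors the description:

```java
import java.util.OptionalLong;

// Hypothetical model of the broker's LSO lookup for one partition log.
class LogModel {
    volatile long highWatermark = 50646L;
    volatile OptionalLong firstUnstableOffset = OptionalLong.empty();

    long fetchLastStableOffset() {
        // If there is an unstable (still-open or aborting) transaction below
        // the high watermark, the LSO is its first offset; otherwise the LSO
        // is the high watermark itself.
        OptionalLong fuo = firstUnstableOffset;
        if (fuo.isPresent() && fuo.getAsLong() < highWatermark) {
            return fuo.getAsLong();
        }
        // BUG: highWatermark is read a second time here. Another thread may
        // have advanced it since the comparison above, so this read can
        // return a value larger than the one the range check validated.
        return highWatermark;
    }
}
```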
If the first unstable offset is less than the high watermark, we should use it; otherwise we use the high watermark. The problem is that the high watermark referenced here can be updated between the range check and the call to `fetchHighWatermarkMetadata`. If that happens, we end up returning data above the first unstable offset, which is exactly how aborted transaction data leaked through.
The fix is to read the high watermark once and use that cached value in both places. We may consider some additional improvements as well, such as fixing the inconsistency in the fetch response, which included data above the returned high watermark. We may also have the client react more defensively by ignoring fetched data above the high watermark; that would protect newer clients talking to older brokers which still have this bug.
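Under the same illustrative model as above (not the actual broker code), the caching fix looks like this: a single read of the high watermark is used for both the comparison and the returned value, so a concurrent update cannot slip in between.

```java
import java.util.OptionalLong;

// Sketch of the fixed LSO lookup; names are illustrative.
class FixedLogModel {
    volatile long highWatermark = 50646L;
    volatile OptionalLong firstUnstableOffset = OptionalLong.empty();

    long fetchLastStableOffset() {
        // Read the volatile field exactly once; both uses below are
        // guaranteed to see the same snapshot, even if another thread
        // advances the high watermark concurrently.
        long hw = highWatermark;
        OptionalLong fuo = firstUnstableOffset;
        if (fuo.isPresent() && fuo.getAsLong() < hw) {
            return fuo.getAsLong();
        }
        return hw;
    }
}
```

With this version, the worst a concurrent update can cause is a slightly stale (smaller) LSO, which is safe: the consumer simply fetches the newer data on its next request.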