Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.1.0, 1.1.1, 2.0.0
- Fix Version/s: None
- Component/s: None
Description
Hi, we are using Kafka Streams to process a compacted store. When resetting the application and reprocessing from scratch, the default configuration for repartition topics uses 50 MB segments and a 10-minute segment roll time.
Because retention.ms is left undefined, the broker's default retention.ms applies, and the log cleaner starts competing with the application, effectively causing the Streams app to skip records.
The application logs the following:
Fetch offset 213792 is out of range for partition app-id-KTABLE-AGGREGATE-STATE-STORE-0000000015-repartition-7, resetting offset
Fetch offset 110227 is out of range for partition app-id-KTABLE-AGGREGATE-STATE-STORE-0000000015-repartition-2, resetting offset
Resetting offset for partition app-id-KTABLE-AGGREGATE-STATE-STORE-0000000015-repartition-7 to offset 233302.
Resetting offset for partition app-id-KTABLE-AGGREGATE-STATE-STORE-0000000015-repartition-2 to offset 119914.
Adding the following configuration to RepartitionTopicConfig.java resolves the issue:
tempTopicDefaultOverrides.put(TopicConfig.RETENTION_MS_CONFIG, "-1"); // Infinite
My understanding is that this should be safe, since Kafka Streams uses the admin API to explicitly delete consumed records from repartition topics rather than relying on retention.
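As an application-side workaround (without patching RepartitionTopicConfig.java), Streams lets you override internal topic configs by prefixing the broker topic config name with "topic." in the Streams properties, which is what StreamsConfig.topicPrefix(TopicConfig.RETENTION_MS_CONFIG) produces. A minimal sketch, using literal config key strings so it runs without the Kafka jars on the classpath; the application id and bootstrap server are placeholders:

```java
import java.util.Properties;

public class RetentionOverride {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("application.id", "app-id");                // placeholder
        props.put("bootstrap.servers", "localhost:9092");     // placeholder
        // The "topic." prefix applies the setting to Streams-managed internal
        // topics (repartition and changelog topics). Equivalent to:
        //   props.put(StreamsConfig.topicPrefix(TopicConfig.RETENTION_MS_CONFIG), "-1");
        props.put("topic.retention.ms", "-1"); // -1 = infinite retention
        System.out.println(props.getProperty("topic.retention.ms"));
    }
}
```

With this override in place the log cleaner no longer deletes repartition segments out from under the application during a reset.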
Attachments
Issue Links
- duplicates KAFKA-6535: Set default retention ms for Streams repartition topics to Long.MAX_VALUE (Resolved)