Details
- Type: Bug
- Status: Open
- Priority: Critical
- Resolution: Unresolved
- Affects Version/s: 1.0.0
- Fix Version/s: None
- Component/s: None
- Environment: production
- Labels: Important
Description
Hi,
We are observing that log segments are being deleted prematurely, before the largest message timestamp in the segment has reached its retention period.
We are using broker version kafka_2.11-1.0.0.
Looking at the timeindex file, I see that it is being appended with "timestamp: 0" entries whose offset equals the start offset of the segment.
Example :
File : 00000000000000047730.timeindex
timestamp: 1565565117007 offset: 47799
timestamp: 1565565117037 offset: 47846
timestamp: 1565565117087 offset: 47917
...
timestamp: 1565565118742 offset: 50607
timestamp: 0 offset: 47730
timestamp: 0 offset: 47730
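For reference, a dump like the one above can be produced with the DumpLogSegments tool that ships with the broker distribution (the path below is illustrative; adjust it to your log directory):

```shell
# Dump the time index of the affected segment.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /home/test/data-5/TOPICNAME-0/00000000000000047730.timeindex
```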
The last message published to this segment was at 1565565118742, which is 00:11 BST, and the segment was deleted by the log cleaner at 12:12 BST.
[12:12:34,143] INFO Rolled new log segment for 'TOPICNAME-0' in 50 ms. (kafka.log.Log)
[12:12:34,143] INFO Scheduling log segment 47942 for log TOPICNAME-0 for deletion. (kafka.log.Log)
[12:12:34,147] INFO Incrementing log start offset of partition TOPICNAME-0 to 50749 in dir /home/test/data-5 (kafka.log.Log)
[12:12:34,149] INFO Cleared earliest 0 entries from epoch cache based on passed offset 50749 leaving 1 in EpochFile for partition TOPICNAME-0 (kafka.server.epoch.LeaderEpochFileCache)
[12:13:34,147] INFO Deleting segment 47942 from log TOPICNAME-0. (kafka.log.Log)
[12:13:34,148] INFO Deleting index /home/test/data-5/TOPICNAME-0/00000000000000047942.index.deleted (kafka.log.OffsetIndex)
[12:13:34,149] INFO Deleting index /home/test/data-5/TOPICNAME-0/00000000000000047942.timeindex.deleted (kafka.log.TimeIndex)
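To illustrate why the spurious "timestamp: 0" entries would trigger deletion: time-based retention compares the segment's largest timestamp against retention.ms, and if the broker derives that largest timestamp from the last time-index entry, a zero entry makes the segment look arbitrarily old. A minimal sketch of that decision (not Kafka's actual code; the 12-hour retention value is an assumption for this example):

```python
# Time-based retention sketch: a segment is eligible for deletion when
# "now - largest timestamp in the segment" exceeds retention.ms.
RETENTION_MS = 12 * 60 * 60 * 1000  # assumed retention.ms of 12 hours

def largest_timestamp(time_index):
    """Take the last (timestamp, offset) entry as the segment's largest
    timestamp -- a simplification of the broker's bookkeeping."""
    return time_index[-1][0]

def is_expired(time_index, now_ms, retention_ms=RETENTION_MS):
    return now_ms - largest_timestamp(time_index) > retention_ms

# One minute after the last real append (1565565118742, offset 50607):
now_ms = 1565565118742 + 60_000

healthy = [(1565565117007, 47799), (1565565118742, 50607)]
corrupted = healthy + [(0, 47730), (0, 47730)]

print(is_expired(healthy, now_ms))    # False: segment is only a minute old
print(is_expired(corrupted, now_ms))  # True: timestamp 0 makes the segment look decades old
```

Under this model, any segment whose time index ends in a zero-timestamp entry fails the retention check immediately, which matches the premature deletion we are seeing.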
Please help us understand what is causing this issue and how to fix it.