Details
Type: Bug
Status: Open
Priority: Critical
Resolution: Unresolved
Affects Version/s: 1.1.0
Fix Version/s: None
Component/s: None
Description
We have a three-node Kafka cluster running 1.1.0 with the global log.retention.ms set to 7200000 (2h). The global log.roll.ms=300000 is set so that a new segment is rolled every 5 minutes, allowing old segments to be cleaned up at a fine granularity.
The topic is configured with cleanup.policy=delete.
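To rule out a per-topic override shadowing the broker defaults, the effective settings can be inspected with kafka-configs.sh. This is only a sketch: localhost:2181 and test-topic stand in for our actual ZooKeeper quorum and topic name.

# Broker defaults, as set in server.properties on every broker:
#   log.retention.ms=7200000
#   log.roll.ms=300000
# Look for per-topic overrides (retention.ms or segment.ms here would
# shadow the broker defaults above):
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name test-topic --describe

If --describe lists no overrides, only the broker defaults apply.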
I can see some old segments being deleted, e.g.:
Jul 16 08:59:17 server460 kafka[26553]: [2018-07-16 08:59:17,321] INFO [Log partition=test-topic-6, dir=/kafka] Deleting segment 18702312 (kafka.log.Log)
Jul 16 08:59:17 server460 kafka[26553]: [2018-07-16 08:59:17,329] INFO Deleted log /kafka/test-topic-6/00000000000018702312.log.deleted. (kafka.log.LogSegment)
Jul 16 08:59:17 server460 kafka[26553]: [2018-07-16 08:59:17,329] INFO Deleted offset index /kafka/test-topic-6/00000000000018702312.index.deleted. (kafka.log.LogSegment)
Jul 16 08:59:17 server460 kafka[26553]: [2018-07-16 08:59:17,329] INFO Deleted time index /kafka/test-topic-6/00000000000018702312.timeindex.deleted. (kafka.log.LogSegment)
But apparently not all of them: segments even older than the ones being deleted remain in the topic's directory, ultimately growing to fill up the entire disk.
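For illustration, the lingering segments can be seen by listing the partition directory oldest-first; with a 2h retention, nothing substantially older than two hours should survive here:

# Oldest segments first; the path matches the log excerpt above:
ls -ltr /kafka/test-topic-6/*.log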
Are there any other configuration values that may affect old segment deletion, or is this a bug?
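One avenue I considered while debugging: as far as I understand, time-based retention (since KIP-33) is driven by the largest record timestamp in a segment rather than by file mtime, so producer-supplied far-future timestamps could keep segments alive indefinitely. The time index of a lingering segment can be dumped to check this; the file name below is illustrative, not one of our actual segments:

# Print the (timestamp, offset) entries of a segment's time index:
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /kafka/test-topic-6/00000000000018702312.timeindex

If the largest timestamp printed falls within the last two hours even for very old segments, that would explain why they are not being deleted.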