Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.8.2.0
- Fix Version/s: None
- Component/s: None
Description
It's possible for log cleaning to generate segments that have a gap of more than Int.MaxValue between their base offset and their last offset. Such segments cannot be indexed, since the offset index has only 4 bytes available to store that difference. The broker ends up writing overflowed ints into the index and does not detect the problem until it is restarted, at which point you get one of these:
2015-03-16 20:35:49,632 FATAL [main] kafka.server.KafkaServerStartable - Fatal error during KafkaServerStartable startup. Prepare to shutdown
java.lang.IllegalArgumentException: requirement failed: Corrupt index found, index file (/mnt/persistent/kafka-logs/topic/00000000000000000000.index) has non-zero size but the last offset is -1634293959 and the base offset is 0
at scala.Predef$.require(Predef.scala:233)
at kafka.log.OffsetIndex.sanityCheck(OffsetIndex.scala:352)
at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:204)
at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:203)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.Log.loadSegments(Log.scala:203)
at kafka.log.Log.<init>(Log.scala:67)
at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$7$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:142)
at kafka.utils.Utils$$anon$1.run(Utils.scala:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
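
For illustration, here is a minimal Scala sketch (not Kafka's actual OffsetIndex code; the names and values are hypothetical) showing how a gap wider than Int.MaxValue wraps to a negative 4-byte relative offset, producing a bogus "last offset" like the one in the error above:

object RelativeOffsetOverflow extends App {
  val baseOffset: Long = 0L
  // Hypothetical last offset left behind after the cleaner removed
  // more than Int.MaxValue worth of intervening records.
  val lastOffset: Long = baseOffset + Int.MaxValue.toLong + 1000L

  // The offset index stores each entry's offset relative to the segment's
  // base offset as a 4-byte Int, so this conversion silently overflows.
  val relative: Int = (lastOffset - baseOffset).toInt
  println(s"relative offset stored in index: $relative") // negative

  // Reconstructing the absolute offset from the index then yields a
  // negative value, which is what sanityCheck rejects on restart.
  println(s"reconstructed last offset: ${baseOffset + relative}")
}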