Affects Version/s: 0.8.1.1
Fix Version/s: 0.8.2.0
The maximum value for the topic-level config segment.bytes is Int.MaxValue (2147483647). Using values close to this maximum causes brokers to corrupt their log files, leaving them unreadable.
We set segment.bytes to 2122317824, which is well below the maximum. One by one, the ISR of every partition shrank to 1. Brokers would crash when restarted, attempting to read from a negative offset in a log file. After discovering that many segment files had grown to 4 GB or more, we were forced to shut down our entire production Kafka cluster for several hours while we split all segment files into 1 GB chunks.
Looking into the kafka.log code, we found that the segment.bytes parameter is used inconsistently. It is treated as a soft maximum for the size of the segment file (https://github.com/apache/kafka/blob/0.8.1.1/core/src/main/scala/kafka/log/LogConfig.scala#L26): a log is rolled only after (https://github.com/apache/kafka/blob/0.8.1.1/core/src/main/scala/kafka/log/Log.scala#L246) the active segment exceeds this value. However, much of the code that deals with log files uses Ints to store the size of the file and the position within it. Once a segment grows past Int.MaxValue bytes, these Ints overflow, so the broker keeps appending to the segment indefinitely and can no longer read it for consumption or recovery.
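To make the failure concrete, here is a minimal sketch (plain arithmetic, not Kafka code) of what happens when a 32-bit Int tracks the write position of a segment that has already reached the configured limit: the next append wraps the position to a negative value, which is where the negative offsets seen on restart come from.

{code:scala}
object SegmentOverflowSketch {
  def main(args: Array[String]): Unit = {
    // The soft limit we configured; the log is only rolled after a segment exceeds it.
    val segmentBytes: Int = 2122317824
    // A hypothetical batch appended while the segment is already at the limit.
    val appendBytes: Int = 100 * 1024 * 1024

    val newPosition: Int = segmentBytes + appendBytes
    println(newPosition) // prints -2067791872: the Int wrapped and the position is now negative
  }
}
{code}

A negative size also means the roll check (current segment size exceeding segment.bytes) can never fire again, which matches the segments we found at 4 GB and beyond.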
This is trivial to reproduce:
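For example, with a single local 0.8.1.1 broker on localhost:9092, create a topic with segment.bytes at the maximum and produce messages until the active segment passes 2 GB. The sketch below uses the old Scala producer API; the topic name overflow-test and the roughly 10 KB payload are arbitrary choices for illustration.

{code:scala}
// Create the topic beforehand, e.g.:
//   bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic overflow-test \
//     --partitions 1 --replication-factor 1 --config segment.bytes=2147483647
import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object OverflowRepro {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("metadata.broker.list", "localhost:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    val producer = new Producer[String, String](new ProducerConfig(props))

    val payload = "x" * 10000 // ~10 KB per message
    try {
      // Keep producing; the broker keeps appending to the same segment well past 2 GB.
      while (true)
        producer.send(new KeyedMessage[String, String]("overflow-test", payload))
    } finally {
      producer.close()
    }
  }
}
{code}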
After running for a few minutes, the log file is corrupt and the broker can no longer read it.
We recovered the data from the log files using a simple Python script: https://gist.github.com/also/9f823d9eb9dc0a410796
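The idea is simple: walk the segment with 64-bit file positions instead of the Int-based arithmetic described above, and copy the entries into files small enough for the broker to read. Below is a simplified Scala sketch of that approach (not the linked script); it assumes the 0.8 on-disk layout of an 8-byte offset and a 4-byte message size before each message.

{code:scala}
import java.io._

object SplitSegmentSketch {
  val ChunkBytes: Long = 1L << 30 // target roughly 1 GB per output file

  def main(args: Array[String]): Unit = {
    val path = args(0) // path to an oversized .log segment
    val in = new DataInputStream(new BufferedInputStream(new FileInputStream(path)))
    var out: DataOutputStream = null
    var written: Long = ChunkBytes // forces a new output file on the first entry
    var chunk = 0

    try {
      while (true) {
        // Each entry is [8-byte offset][4-byte message size][message bytes].
        val offset = in.readLong() // throws EOFException at the end of the file
        val size = in.readInt()
        val message = new Array[Byte](size)
        in.readFully(message)

        if (written + 12 + size > ChunkBytes) {
          if (out != null) out.close()
          out = new DataOutputStream(
            new BufferedOutputStream(new FileOutputStream(s"$path.part$chunk")))
          chunk += 1
          written = 0L
        }
        out.writeLong(offset)
        out.writeInt(size)
        out.write(message)
        written += 12 + size
      }
    } catch {
      case _: EOFException => // reached the end of the segment (or a truncated final entry)
    } finally {
      if (out != null) out.close()
      in.close()
    }
  }
}
{code}

Before a broker will load the resulting files, they also need to be renamed so that each file name matches its first offset.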