Description
The replica fetcher thread fails with a MessageSizeTooLargeException. One theory is that the message size check happens before the decompress / assign-offsets / recompress phase, so the final compressed size written to the leader's log can differ from (and exceed) the size that was validated in the produce request. As a result, the replica fetcher thread is permanently down and the broker can never get back in sync.
2013/02/20 02:19:25.447 ERROR [ReplicaFetcherThread] [ReplicaFetcherThread-0-274] [kafka] [] [ReplicaFetcherThread-0-274], Error due to
kafka.common.MessageSizeTooLargeException: Message size is 1000028 bytes which exceeds the maximum configured message size of 1000000.
at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:353)
at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:339)
at scala.collection.Iterator$class.foreach(Iterator.scala:631)
at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
at kafka.log.Log.analyzeAndValidateMessageSet(Log.scala:339)
at kafka.log.Log.append(Log.scala:262)
at kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:52)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$4.apply(AbstractFetcherThread.scala:130)
at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$4.apply(AbstractFetcherThread.scala:113)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:125)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:344)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:113)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:89)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
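To illustrate the theory above, here is a minimal, hypothetical Scala sketch (not Kafka code): recompressing the same payload with different per-message offsets generally produces a different compressed size, so a wrapper message that passed the size check at produce time can exceed the configured maximum after the leader reassigns offsets and recompresses. The batch layout and helper names below are assumptions made only for illustration.

import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

object CompressedSizeDrift {

  // Compress a batch of (offset, payload) pairs, conceptually mimicking how a
  // compressed message set nests offsets next to payloads inside one wrapper.
  def compressedSize(offsets: Seq[Long], payload: Array[Byte]): Int = {
    val bos = new ByteArrayOutputStream()
    val gzip = new GZIPOutputStream(bos)
    offsets.foreach { off =>
      gzip.write(BigInt(off).toByteArray) // offset encoding varies with the value
      gzip.write(payload)
    }
    gzip.close()
    bos.size()
  }

  def main(args: Array[String]): Unit = {
    val payload = Array.fill[Byte](1000)(scala.util.Random.nextInt().toByte)

    // Producer side: relative offsets 0..99.
    val producerSize = compressedSize(0L until 100L, payload)
    // Broker side: the leader assigns absolute log offsets before recompressing.
    val brokerSize = compressedSize(5000000L until 5000100L, payload)

    println(s"producer-side compressed size: $producerSize")
    println(s"broker-side compressed size:   $brokerSize")
    // The two sizes generally differ, so a check performed only against the
    // producer-supplied bytes does not bound what actually ends up in the log
    // and is later fetched by the follower.
  }
}

If this is what is happening, the size validated on the produce path is not the size the follower sees, which would explain why the follower's check in Log.analyzeAndValidateMessageSet fails even though the producer's request was accepted.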