Description
Hi all,
I am facing an issue with kafka-mirror-maker.sh.
We have two Kafka clusters with identical configuration, and MirrorMaker instances in charge of mirroring between the clusters.
We haven't changed the default configuration for the message size, so the 1000012-byte limit is expected on both clusters.
We are seeing the following error on the mirroring side:
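For reference, the broker-side limit in question comes from `message.max.bytes`, whose default is 1000012 bytes; the MirrorMaker producer additionally enforces its own client-side cap, `max.request.size` (default 1048576 bytes). A sketch of the relevant properties as we understand them, with the values shown being the Kafka defaults (we have not overridden either):

```properties
# Broker side (server.properties) -- largest record (batch) the broker will accept
message.max.bytes=1000012

# MirrorMaker producer side (producer.properties) -- client-side cap on a single request
max.request.size=1048576
```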
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR Error when sending message to topic my_topic_name with key: 81 bytes, value: 1000272 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] ERROR Error when sending message to topic my_topic_name with key: 81 bytes, value: 13846 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: java.lang.IllegalStateException: Producer is closed forcefully.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at java.lang.Thread.run(Thread.java:745)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,511] FATAL [mirrormaker-thread-0] Mirror maker thread failure due to (kafka.tools.MirrorMaker$MirrorMakerThread)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: java.lang.IllegalStateException: Cannot send after the producer is closed.
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at scala.collection.Iterator$class.foreach(Iterator.scala:893)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
My first question: why am I getting this error?
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: [2017-09-21 14:30:49,431] ERROR Error when sending message to topic my_topic_name with key: 81 bytes, value: 1000272 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Sep 21 14:30:49 lpa2e194 kafka-mirror-maker.sh: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
And my second question: how can MirrorMaker encounter a 1000272-byte message at all, when the Kafka cluster being mirrored has the default limit of 1000012 bytes per message?
Please find the MirrorMaker consumer and producer config files attached.
Thanks for your input.
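For context, the instances are launched roughly as follows (the topic name and properties-file paths are placeholders standing in for our actual values):

```shell
kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist "my_topic_name"
```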