Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.10.0.0
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
After updating Kafka from 0.9.0.1 to 0.10.0.0, I'm getting plenty of `Error processing append operation on partition` errors. This happens with ruby-kafka as the producer and snappy compression enabled.
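For context, the reported producer setup looks roughly like the following sketch using the ruby-kafka gem. The broker address and message payload are placeholders (the topic name `m2m` is taken from the error log); the relevant detail from the report is `compression_codec: :snappy`:

```ruby
require "kafka"

# Placeholder broker address; the snappy codec is what triggers the
# broker-side decompression path that fails in the stack trace below.
kafka = Kafka.new(seed_brokers: ["kafka1:9092"], client_id: "my-app")

producer = kafka.producer(compression_codec: :snappy)
producer.produce("example message", topic: "m2m")
producer.deliver_messages
```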
```
[2016-05-27 20:00:11,074] ERROR [Replica Manager on Broker 2]: Error processing append operation on partition m2m-0 (kafka.server.ReplicaManager)
kafka.common.KafkaException:
	at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
	at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
	at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
	at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
	at kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
	at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
	at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
	at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
	at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
	at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
	at kafka.log.Log.liftedTree1$1(Log.scala:339)
	at kafka.log.Log.append(Log.scala:338)
	at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
	at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
	at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
	at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
	at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
	at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
	at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
	at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: failed to read chunk
	at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
	at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
	at java.io.DataInputStream.readFully(DataInputStream.java:195)
	at java.io.DataInputStream.readLong(DataInputStream.java:416)
	at kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
	at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
```
Issue Links
- Is contained by: KAFKA-3789 Upgrade Snappy to fix snappy decompression errors (Resolved)