KAFKA-15947

Null pointer on LZ4 compression since Kafka 3.6


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 3.6.0
    • Fix Version/s: 3.6.1
    • Component/s: compression
    • Labels: None

    Description

      I have a Kafka Streams application that has been running well for months using client version 3.5.1 (bitnami image: bitnami/3.5.1-debian-11-r44) with compression.type: "lz4".

      I've recently updated my Kafka server to Kafka 3.6 (bitnami image: bitnami/kafka:3.6.0-debian-11-r0).
       
      After a restart everything works well for days, but after some time the Kafka Streams application crashes and the Kafka broker logs a lot of NullPointerExceptions on the console:
       

      org.apache.kafka.common.KafkaException: java.lang.NullPointerException: Cannot invoke "java.nio.ByteBuffer.hasArray()" because "this.intermediateBufRef" is null
      	at org.apache.kafka.common.record.CompressionType$4.wrapForInput(CompressionType.java:134)
      	at org.apache.kafka.common.record.DefaultRecordBatch.recordInputStream(DefaultRecordBatch.java:273)
      	at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:277)
      	at org.apache.kafka.common.record.DefaultRecordBatch.skipKeyValueIterator(DefaultRecordBatch.java:352)
      	at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsetsCompressed(LogValidator.java:358)
      	at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsets(LogValidator.java:165)
      	at kafka.log.UnifiedLog.$anonfun$append$2(UnifiedLog.scala:805)
      	at kafka.log.UnifiedLog.append(UnifiedLog.scala:1845)
      	at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
      	at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
      	at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
      	at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1210)
      	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
      	at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
      	at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
      	at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
      	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
      	at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
      	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
      	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
      	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
      	at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1198)
      	at kafka.server.ReplicaManager.$anonfun$appendRecords$18$adapted(ReplicaManager.scala:754)
      	at kafka.server.KafkaRequestHandler$.$anonfun$wrap$3(KafkaRequestHandler.scala:73)
      	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:130)
      	at java.base/java.lang.Thread.run(Thread.java:833)
      Caused by: java.lang.NullPointerException: Cannot invoke "java.nio.ByteBuffer.hasArray()" because "this.intermediateBufRef" is null
      	at org.apache.kafka.common.utils.ChunkedBytesStream.<init>(ChunkedBytesStream.java:89)
      	at org.apache.kafka.common.record.CompressionType$4.wrapForInput(CompressionType.java:132)
      	... 25 more 

      At the same time, the Kafka Streams application raises this error:

       

      org.apache.kafka.streams.errors.StreamsException: Error encountered sending record to topic kestra_workertaskresult for task 3_6 due to:
      org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request.
      Written offsets would not be recorded and no more records would be sent since this is a fatal error.
      	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:297)
      	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.lambda$send$1(RecordCollectorImpl.java:284)
      	at org.apache.kafka.clients.producer.KafkaProducer$AppendCallbacks.onCompletion(KafkaProducer.java:1505)
      	at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:273)
      	at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:234)
      	at org.apache.kafka.clients.producer.internals.ProducerBatch.completeExceptionally(ProducerBatch.java:198)
      	at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:772)
      	at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:757)
      	at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:709)
      	at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:648)
      	at org.apache.kafka.clients.producer.internals.Sender.lambda$null$1(Sender.java:589)
      	at java.base/java.util.ArrayList.forEach(Unknown Source)
      	at org.apache.kafka.clients.producer.internals.Sender.lambda$handleProduceResponse$2(Sender.java:576)
      	at java.base/java.lang.Iterable.forEach(Unknown Source)
      	at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:576)
      	at org.apache.kafka.clients.producer.internals.Sender.lambda$sendProduceRequest$5(Sender.java:850)
      	at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:154)
      	at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:594)
      	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:586)
      	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
      	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
      	at java.base/java.lang.Thread.run(Unknown Source)
      Caused by: org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request.
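      The same produce path can be exercised with a plain producer; the sketch below (topic name and broker address are placeholders, not from this deployment) sends one LZ4-compressed record and logs whatever the send callback reports, which is where the UnknownServerException above surfaces on the client side:

      import java.util.Properties;

      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.common.serialization.StringSerializer;

      public class Lz4ProducerCheck {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder broker
              props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
              props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
              props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");                // same compression as the Streams app

              try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                  producer.send(new ProducerRecord<>("test-topic", "key", "value"), (metadata, exception) -> {
                      if (exception != null) {
                          // A broker-side failure (e.g. the NPE above) comes back here as UnknownServerException.
                          System.err.println("Send failed: " + exception);
                      } else {
                          System.out.println("Written to " + metadata.topic() + "-" + metadata.partition()
                                  + " at offset " + metadata.offset());
                      }
                  });
                  producer.flush();
              }
          }
      }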
       

      The error loops on the Kafka Streams side (the pods are restarted, and I suppose the topics are consumed from the same offsets, so it crashes with the same error on both server and client).

      Whenever I restart the Kafka server, everything goes back to normal and I don't have any more issues.

      The error is mostly transient and happens every few days, with no solution other than restarting the server.

People

    • Assignee: Unassigned
    • Reporter: tchiotludo (Ludo)
    • Votes: 0
    • Watchers: 2
