Kafka / KAFKA-10645

Forwarding a record from a punctuator sometimes results in a NullPointerException


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.5.0
    • Fix Version/s: None
    • Component/s: clients
    • Labels: None

    Description

      Hello,
      I am working on a Java Kafka Streams application (v. 2.5.0) running on a Kubernetes cluster.

      It's a Spring Boot application running on Java 8.

      Since the last upgrade, to version 2.5.0, I have started to see some NullPointerExceptions in the logs, which happen when forwarding a record from a punctuator.
      This is the stack trace of the exception:

      Caused by: org.apache.kafka.streams.errors.StreamsException: task [2_2] Abort sending since an error caught with a previous record (timestamp 1603721062667) to topic reply-reminder-push-sender due to java.lang.NullPointerException
          at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:240)
          at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:111)
          at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
          at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)
          at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)
          at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)
          ... 24 common frames omitted
      Caused by: java.lang.NullPointerException: null
          at org.apache.kafka.common.record.DefaultRecord.sizeOf(DefaultRecord.java:613)
          at org.apache.kafka.common.record.DefaultRecord.recordSizeUpperBound(DefaultRecord.java:633)
          at org.apache.kafka.common.record.DefaultRecordBatch.estimateBatchSizeUpperBound(DefaultRecordBatch.java:534)
          at org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:135)
          at org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:125)
          at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:914)
          at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
          at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:181)
          ... 29 common frames omitted
      

      Checking the code, it looks like the exception happens while calculating the size of the record: one of the record headers is null. I don't think I can control those headers, right?
      Thanks a lot
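      For what it's worth, the failure mode can be reproduced in isolation. The sketch below is not the actual Kafka code; it is a made-up stand-in (the class and method names are hypothetical) for the header-size arithmetic that DefaultRecord.sizeOf performs. Like the real code, it dereferences the header key without a null check, so a null key surfaces as a NullPointerException at exactly that point:

```java
import java.nio.charset.StandardCharsets;

public class HeaderSizeDemo {

    // Hypothetical, simplified header sizing: the real DefaultRecord.sizeOf
    // also derives the key's byte length without guarding against null.
    static int headerSize(String key, byte[] value) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8); // NPE when key == null
        return keyBytes.length + (value == null ? 0 : value.length);
    }

    public static void main(String[] args) {
        // A well-formed header is sized normally: 8 key bytes + 2 value bytes.
        System.out.println(headerSize("trace-id", new byte[]{1, 2})); // prints 10

        // A null header key reproduces the reported failure mode.
        try {
            headerSize(null, new byte[]{1});
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, as in the stack trace");
        }
    }
}
```

      So whichever component injects a header with a null key (or a null Header entry) only fails later, at send time, which is why the error shows up under RecordCollectorImpl.send rather than where the header was set.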

      Attachments

        Issue Links

          Activity

            People

              Assignee: Matthias J. Sax (mjsax)
              Reporter: Filippo Machi (filmac79)
              Votes: 0
              Watchers: 5

              Dates

                Created:
                Updated: