Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Duplicate
Affects Version/s: 1.0.0
Fix Version/s: None
Component/s: None
Description
I am using the Kafka producer plugin for Logback (danielwegener) with the clients library 1.0.0. After a restart of the broker, all of my JVMs connected to it produce a flood of exceptions:
11:22:48.738 [kafka-producer-network-thread | app-logback-relaxed] cid: clid: E [ @] a: o.a.k.c.p.internals.Sender - [Producer clientId=id-id-logback-relaxed] Uncaught error in kafka producer I/O thread:
ex:java.lang.NullPointerException: null
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:399)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
    at java.lang.Thread.run(Thread.java:798)
During the restart, other brokers are still available behind the load balancer. It does not matter that Kafka comes back up again; only restarting the JVM helps.
<appender name="kafkaLogAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- This is the default encoder that encodes every log message to a UTF-8-encoded string -->
    <encoder>
        <pattern>%date{"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"} ${HOSTNAME} [%thread] %logger{32} - %message ex:%exf%n</pattern>
    </encoder>
    <topic>mytopichere</topic>
    <!-- we don't care how the log messages will be partitioned -->
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
    <!-- use async delivery; application threads are not blocked by logging -->
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <!-- each <producerConfig> translates to a regular kafka-client config entry (format: key=value) -->
    <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
    <!-- bootstrap.servers is the only mandatory producerConfig -->
    <producerConfig>bootstrap.servers=10.99.99.1:9092</producerConfig>
    <!-- don't wait for a broker to ack the reception of a batch -->
    <producerConfig>acks=0</producerConfig>
    <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
    <producerConfig>block.on.buffer.full=false</producerConfig>
    <!-- define a client-id that you use to identify yourself against the kafka broker -->
    <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
    <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy -->
    <!-- overhead still to be tested -->
    <producerConfig>compression.type=none</producerConfig>
    <!-- there is no fallback <appender-ref>; if this appender cannot deliver, it will drop its messages -->
    <producerConfig>max.block.ms=0</producerConfig>
</appender>
I provide the load balancer address in bootstrap.servers here. There are three Kafka brokers behind it.
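For reference, each `<producerConfig>` entry above maps one-to-one to a kafka-clients producer property. A minimal sketch of the equivalent settings built as `java.util.Properties` (the client.id value is a hypothetical expansion of `${HOSTNAME}-${CONTEXT_NAME}`, since those are resolved by Logback at runtime):

```java
import java.util.Properties;

public class LogbackKafkaProducerConfig {

    // Mirrors the <producerConfig> entries from the appender configuration above.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.99.99.1:9092"); // load balancer address, 3 brokers behind it
        props.put("acks", "0");                            // fire-and-forget, no broker acknowledgement
        props.put("block.on.buffer.full", "false");        // drop messages instead of blocking
        props.put("client.id", "myhost-myapp-logback-relaxed"); // hypothetical expanded value
        props.put("compression.type", "none");
        props.put("max.block.ms", "0");                    // never block the logging thread
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerProps();
        System.out.println("acks=" + props.getProperty("acks"));
        System.out.println("bootstrap.servers=" + props.getProperty("bootstrap.servers"));
    }
}
```

Note that `block.on.buffer.full` was, as far as I know, removed from kafka-clients before 1.0.0, so that entry is likely ignored by the 1.0.0 producer.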
java version "1.7.0"
Java(TM) SE Runtime Environment (build pap6470sr9fp60ifix-20161110_01(SR9 FP60)+IV90630+IV90578))
IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 20161005_321282 (JIT enabled, AOT enabled)
J9VM - R26_Java726_SR9_20161005_1259_B321282
JIT - tr.r11_20161001_125404
GC - R26_Java726_SR9_20161005_1259_B321282_CMPRSS
J9CL - 20161005_321282)
JCL - 20161021_01 based on Oracle jdk7u121-b15