Kafka / KAFKA-3205

Error in I/O with host (java.io.EOFException) raised in producer


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 0.8.2.1, 0.9.0.0
    • Fix Version/s: None
    • Component/s: clients
    • Labels: None

    Description

      With a Kafka broker running 0.9 and producers still on 0.8.2.x, the producers seem to raise the following error a variable amount of time after startup:

      2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 172.22.2.170
      java.io.EOFException: null
              at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
              at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
              at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
              at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
              at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
              at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
      

      This can be reproduced by doing the following:

      • Start a 0.8.2 producer connected to the 0.9 broker
      • Wait exactly 15 minutes
      • See the error in the producer logs.

      Oddly, the error also shows up in an active producer, but after 10 minutes of activity.
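
      A minimal sketch of an idle producer that should hit this, assuming the 0.8.2.x kafka-clients jar on the classpath and an existing topic named test (topic name and payload are placeholders; the broker address matches the bootstrap.servers value from the configuration further below):

      import java.util.Properties;

      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerRecord;

      public class IdleProducerRepro {
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put("bootstrap.servers", "127.0.0.1:9092"); // the 0.9 broker
              props.put("acks", "all");
              props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

              KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

              // Send one record so a connection to the broker is established,
              // then leave the producer idle; the WARN with the EOFException
              // appears in the producer log after roughly 15 minutes.
              producer.send(new ProducerRecord<String, String>("test", "key", "value")).get();
              Thread.sleep(20 * 60 * 1000L);

              producer.close();
          }
      }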

      Kafka's server.properties:

      broker.id=1
      listeners=PLAINTEXT://:9092
      port=9092
      num.network.threads=2
      num.io.threads=2
      socket.send.buffer.bytes=1048576
      socket.receive.buffer.bytes=1048576
      socket.request.max.bytes=104857600
      log.dirs=/mnt/data/kafka
      num.partitions=4
      auto.create.topics.enable=false
      delete.topic.enable=true
      num.recovery.threads.per.data.dir=1
      log.retention.hours=48
      log.retention.bytes=524288000
      log.segment.bytes=52428800
      log.retention.check.interval.ms=60000
      log.roll.hours=24
      log.cleanup.policy=delete
      log.cleaner.enable=true
      zookeeper.connect=127.0.0.1:2181
      zookeeper.connection.timeout.ms=1000000
      

      Producer's configuration:

      	compression.type = none
      	metric.reporters = []
      	metadata.max.age.ms = 300000
      	metadata.fetch.timeout.ms = 60000
      	acks = all
      	batch.size = 16384
      	reconnect.backoff.ms = 10
      	bootstrap.servers = [127.0.0.1:9092]
      	receive.buffer.bytes = 32768
      	retry.backoff.ms = 500
      	buffer.memory = 33554432
      	timeout.ms = 30000
      	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
      	retries = 3
      	max.request.size = 5000000
      	block.on.buffer.full = true
      	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
      	metrics.sample.window.ms = 30000
      	send.buffer.bytes = 131072
      	max.in.flight.requests.per.connection = 5
      	metrics.num.samples = 2
      	linger.ms = 0
      	client.id = 
      
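
      The same settings can be expressed as the Properties block that would replace the minimal one in the sketch above; a sketch with the values copied verbatim from the dump (metric.reporters and the empty client.id are omitted):

      Properties props = new Properties();
      props.put("bootstrap.servers", "127.0.0.1:9092");
      props.put("acks", "all");
      props.put("retries", "3");
      props.put("retry.backoff.ms", "500");
      props.put("reconnect.backoff.ms", "10");
      props.put("batch.size", "16384");
      props.put("linger.ms", "0");
      props.put("buffer.memory", "33554432");
      props.put("compression.type", "none");
      props.put("max.request.size", "5000000");
      props.put("max.in.flight.requests.per.connection", "5");
      props.put("block.on.buffer.full", "true");
      props.put("timeout.ms", "30000");
      props.put("metadata.max.age.ms", "300000");
      props.put("metadata.fetch.timeout.ms", "60000");
      props.put("send.buffer.bytes", "131072");
      props.put("receive.buffer.bytes", "32768");
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");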

      Attachments

        Issue Links

        Activity


          People

            Assignee: Unassigned
            Reporter: Jonathan Raffre (nekonyuu)
            Flavio Paiva Junqueira
            Votes: 0
            Watchers: 8

            Dates

              Created:
              Updated:
              Resolved:
