Details
- Type: Bug
- Status: Open
- Priority: Blocker
- Resolution: Unresolved
- Affects Version/s: 2.1.0
- Fix Version/s: None
- Component/s: None
Description
We run a Kafka cluster with 5 brokers, and one of the brokers suddenly stopped during operation. This has happened twice on the same broker. The log is below. Is this a bug in Kafka?
[2019-01-25 12:57:14,686] INFO [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error sending fetch request (sessionId=1578860481, epoch=INITIAL) to node 2: java.io.IOException: Connection to 2 was disconnected before the response was read. (org.apache.kafka.clients.FetchSessionHandler)
[2019-01-25 12:57:14,687] WARN [ReplicaFetcher replicaId=3, leaderId=2, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=3, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={api-result-bi-heatmap-8=(offset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), api-result-bi-heatmap-save-12=(offset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), api-result-bi-heatmap-task-2=(offset=2, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), api-result-bi-flow-39=(offset=1883206, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), __consumer_offsets-47=(offset=349437, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), api-result-bi-heatmap-track-6=(offset=1039889, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), api-result-bi-heatmap-task-17=(offset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), __consumer_offsets-2=(offset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4]), api-result-bi-heatmap-aggs-19=(offset=1255056, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[4])}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=1578860481, epoch=INITIAL)) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 2 was disconnected before the response was read
at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190)
at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241)
at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130)
at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129)
at scala.Option.foreach(Option.scala:257)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
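The trace above only shows the fetcher's view from broker 3; the interesting state is on broker 2, which stopped responding. Since this report is linked as duplicated by KAFKA-7697 (Possible deadlock in kafka.cluster.Partition), the first thing to check on the stuck broker is whether its JVM is deadlocked. Below is a minimal sketch using only the standard java.lang.management API; the class name DeadlockProbe is made up for illustration, and the code must run inside the broker JVM (for example via a java agent). From outside the process, jstack <pid> performs an equivalent check.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Illustrative only, not part of Kafka: reports any deadlock cycle
// found in the current JVM.
public class DeadlockProbe {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads(); // null when no cycle is found
        if (ids == null) {
            System.out.println("No deadlocked threads detected");
            return;
        }
        for (ThreadInfo info : mx.getThreadInfo(ids, Integer.MAX_VALUE)) {
            System.out.println(info.getThreadName() + " blocked on " + info.getLockName()
                    + " held by " + info.getLockOwnerName());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("\tat " + frame);
            }
        }
    }
}

Note that findDeadlockedThreads only sees object monitors and ownable synchronizers; a cycle that goes through read locks (which record no owner) can go unreported, so reading a full jstack thread dump remains the more reliable confirmation.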
Issue Links
- is duplicated by KAFKA-7697: Possible deadlock in kafka.cluster.Partition (Resolved)