Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 0.8.2.1
- Fix Version/s: None
- Component/s: None
- Environment: 3 Linux nodes, each running a ZooKeeper instance and a broker under its own user.
Description
simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException when the original leader fails, instead of the failure being surfaced through the FetchResponse API while consuming messages. My understanding was that any fetch failure could be detected via the fetchResponse.hasError() call and then handled, in this case by looking up the new leader. Below is the relevant code snippet from the simple consumer, with a comment marking the line that throws; can you please comment on this?
if (simpleconsumer == null) {
    simpleconsumer = new SimpleConsumer(leaderAddress.getHostName(), leaderAddress.getPort(),
            consumerTimeout, consumerBufferSize, consumerId);
}
FetchRequest req = new FetchRequestBuilder().clientId(getConsumerId())
        .addFetch(topic, partition, offsetManager.getTempOffset(), consumerBufferSize)
        // Note: the fetch size might need to be increased
        // if large batches are written to Kafka
        .build();
// the exception is thrown at the line below
FetchResponse fetchResponse = simpleconsumer.fetch(req);
if (fetchResponse.hasError()) {
    numErrors++;
    etc...
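For context: hasError() can only report error codes carried inside a response that the broker actually returned; a ClosedChannelException is a connection-level failure, so it is thrown from fetch() itself and never reaches the hasError() check. The usual handling is to catch it, tear down the consumer, re-discover the leader, and retry. A minimal sketch of that retry pattern, using a hypothetical fetchFrom() stand-in and a simple broker list in place of the real SimpleConsumer and leader lookup:

```java
import java.nio.channels.ClosedChannelException;
import java.util.Arrays;
import java.util.List;

public class FetchRetryDemo {

    // Hypothetical stand-in for simpleconsumer.fetch(req): throws while we
    // still target the failed leader, succeeds once we switch brokers.
    static String fetchFrom(String broker) throws ClosedChannelException {
        if (broker.equals("old-leader")) {
            throw new ClosedChannelException();
        }
        return "messages-from-" + broker;
    }

    // Catch the connection-level failure outside the hasError() path,
    // pick a new leader (stubbed here as "next broker in the list"),
    // and retry the fetch.
    static String fetchWithLeaderFailover(List<String> brokers) {
        String leader = brokers.get(0);
        for (int attempt = 0; attempt < brokers.size(); attempt++) {
            try {
                return fetchFrom(leader);
            } catch (ClosedChannelException e) {
                // hasError() never sees this; handle it here instead
                if (attempt + 1 < brokers.size()) {
                    leader = brokers.get(attempt + 1); // stub for leader re-discovery
                }
            }
        }
        throw new IllegalStateException("no live leader found");
    }

    public static void main(String[] args) {
        System.out.println(fetchWithLeaderFailover(
                Arrays.asList("old-leader", "new-leader")));
        // prints "messages-from-new-leader"
    }
}
```

In the real consumer the stubbed leader re-discovery would be a metadata request to a live broker, and the old SimpleConsumer instance should be closed before reconnecting.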