One of our clusters is used to:
- process transactional writes
- produce with acks set to all
We are using the Java client and have followed all the recommendations for avoiding producer fencing issues, etc.
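For reference, our producer is configured roughly like this (a minimal sketch built with plain string keys; the broker hostnames and the transactional.id below are placeholders, not our real values):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Sketch of the producer settings we use (placeholder hosts/ids).
    public static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092"); // placeholder hosts
        props.setProperty("acks", "all");                     // wait for all in-sync replicas to ack
        props.setProperty("enable.idempotence", "true");      // required for transactional producers
        props.setProperty("transactional.id", "my-app-tx-1"); // placeholder id
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps();
        System.out.println("acks=" + p.getProperty("acks"));
    }
}
```

These props are then passed to `new KafkaProducer<>(props)` followed by `initTransactions()` / `beginTransaction()` in the usual way.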
We spotted the problem while upgrading Kafka hosts to stronger machines:
- stop the old broker
- start a new, clean broker node (with a different hostname) reusing the same broker.id
During the operation we found that, although Kafka normally re-replicates the partitions and recovers within a very short period of time (1 - 3 mins), we start to see errors on the Kafka broker:
And we start to see records buffered on the producer side; eventually the producer send requests fail with:
The only additional thing we observed is that, for some reason, the ISR for a couple of partitions had been reduced to 1, and then went back to 3 once the broker finished replicating.
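Our (simplified) understanding of why the ISR shrink matters: with acks=all, the broker rejects a write with NOT_ENOUGH_REPLICAS when the current ISR size drops below min.insync.replicas. The sketch below models just that check, it is our reading of the behaviour, not broker code:

```java
public class IsrCheckSketch {
    // Simplified model of the broker-side check for acks=all writes:
    // the write is accepted only if the current ISR size is at least
    // min.insync.replicas.
    static boolean writeAccepted(int isrSize, int minInsyncReplicas) {
        return isrSize >= minInsyncReplicas;
    }

    public static void main(String[] args) {
        // ISR shrank to 1 while min.insync.replicas=1: writes should still succeed.
        System.out.println(writeAccepted(1, 1)); // true
        // The variant we tested with min.insync.replicas=2: writes would be rejected.
        System.out.println(writeAccepted(1, 2)); // false
    }
}
```

If this model is right, with min.insync.replicas=1 the ISR shrinking to 1 alone should not block producing, which is part of why the errors surprise us.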
The same situation can be observed when adding new brokers to the cluster and performing a rebalance (using Kafka Cruise Control) with the concurrent partition and leader movement limits set to a higher value.
This does not happen if a broker is just stopped (even for a longer period of time) or restarted - it only happens during a host replacement.
Can you please let me know if this is a bug ... or if we are doing something wrong?
min.insync.replicas for topics is set to 1 (also tested with 2 - no change)
replication.factor is 3
all transaction settings are currently at their defaults.
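To summarise the topic-level settings above, here is a small sanity-check sketch (the values are the ones from our cluster; the validity rule, that min.insync.replicas must be between 1 and the replication factor for acks=all to ever succeed, is our assumption stated for clarity):

```java
public class TopicSettingsSketch {
    // min.insync.replicas must be at least 1 and no larger than the
    // replication factor, otherwise acks=all writes can never be satisfied.
    static boolean settingsValid(int replicationFactor, int minInsyncReplicas) {
        return minInsyncReplicas >= 1 && minInsyncReplicas <= replicationFactor;
    }

    public static void main(String[] args) {
        int replicationFactor = 3;
        int minInsyncReplicas = 1; // also tested with 2, no change in behaviour
        System.out.println(settingsValid(replicationFactor, minInsyncReplicas)); // true
    }
}
```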