Description
When you run Connect in distributed mode with two or more workers and config.storage.topic has more than one partition, one of the workers can end up rebalancing endlessly:
[2020-01-13 12:53:23,535] INFO [Worker clientId=connect-1, groupId=connect-cluster] Current config state offset 37 is behind group assignment 63, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished reading to end of log and updated config snapshot, new config log offset: 37 (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, groupId=connect-cluster] Current config state offset 37 does not match group assignment 63. Forcing rebalance. (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
For anyone viewing this who doesn't already know: this topic is only ever meant to be created with exactly one partition.
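For illustration, a minimal sketch of creating the config topic correctly with the Java Admin client. The topic name connect-configs, the bootstrap server address, and the replication factor are assumptions for the example, not values from this report:

    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public class CreateConnectConfigTopic {
        public static void main(String[] args) throws Exception {
            // Assumed bootstrap server and topic name; adjust to your cluster.
            try (Admin admin = Admin.create(Map.of(
                    AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
                // config.storage.topic must have exactly ONE partition, and it
                // must be compacted so the full connector config history is kept.
                NewTopic configTopic = new NewTopic("connect-configs", 1, (short) 3)
                        .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                TopicConfig.CLEANUP_POLICY_COMPACT));
                admin.createTopics(List.of(configTopic)).all().get();
            }
        }
    }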
Suggested Solution
Make the Connect worker check the partition count of config.storage.topic at startup; if the count is greater than 1, Kafka Connect stops and logs the reason why.
I think this is reasonable: it would stop users who are just starting out from building the cluster incorrectly, and the mistake would be easy to fix early. For those upgrading, the check would easily be caught in a pre-production environment. Even if you upgraded directly in production, you would only be impacted if you upgraded all Connect workers at the same time. A sketch of such a check follows.
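A minimal sketch of what such a startup check could look like, using the Java Admin client. The method name verifySinglePartition is hypothetical, as are the topic name and bootstrap server; the actual fix (KAFKA-9216) may be implemented differently:

    import java.util.Map;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;
    import org.apache.kafka.common.config.ConfigException;

    public class ConfigTopicPartitionCheck {

        // Hypothetical check run once at worker startup: fail fast with a
        // clear message instead of entering an endless rebalance loop later.
        static void verifySinglePartition(Admin admin, String configTopic) throws Exception {
            TopicDescription description =
                    admin.describeTopics(Set.of(configTopic)).all().get().get(configTopic);
            int partitions = description.partitions().size();
            if (partitions > 1) {
                throw new ConfigException("Topic '" + configTopic + "' supplied via "
                        + "config.storage.topic has " + partitions
                        + " partitions but must have exactly 1");
            }
        }

        public static void main(String[] args) throws Exception {
            // Assumed bootstrap server and topic name; adjust to your cluster.
            try (Admin admin = Admin.create(Map.of(
                    AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
                verifySinglePartition(admin, "connect-configs");
            }
        }
    }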
Issue Links
- is fixed by: KAFKA-9216 Enforce connect internal topic configuration at startup (Resolved)
- relates to: KAFKA-9216 Enforce connect internal topic configuration at startup (Resolved)