We use Kafka with ZooKeeper via the high-level consumer.
There is a scheduled job that creates a consumer with a specific group, performs the necessary logic, and then shuts that consumer down.
Nothing ever deletes /consumers/myGroup/ids/myGroup_<ip>_<postfix>, so after several job runs a lot of dead consumer IDs accumulate under myGroup. This led to an issue where a new consumer doesn't see a partition.
We have started implementing an approach that removes the consumer nodes from ZooKeeper manually after the consumer is shut down.
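The manual cleanup we have in mind looks roughly like the sketch below. The `ZkOps` interface is a hypothetical stand-in for whatever ZooKeeper client is actually used (e.g. ZkClient or the raw `org.apache.zookeeper.ZooKeeper` API, whose `getChildren`/`delete` calls it mirrors); the znode layout is the one from this post.

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumerIdCleanup {

    // Hypothetical stand-in for the real ZooKeeper client. With the raw API
    // these would be zk.getChildren(path, false) and zk.delete(path, -1).
    interface ZkOps {
        List<String> getChildren(String path);
        void delete(String path);
    }

    // Znode layout from the post: /consumers/<group>/ids/<consumerId>
    static String idsPath(String group) {
        return "/consumers/" + group + "/ids";
    }

    // After the consumer is shut down, remove every consumer ID znode
    // still registered under the group; returns the deleted paths.
    static List<String> cleanup(ZkOps zk, String group) {
        List<String> removed = new ArrayList<>();
        String base = idsPath(group);
        for (String id : zk.getChildren(base)) {
            String node = base + "/" + id;
            zk.delete(node);
            removed.add(node);
        }
        return removed;
    }
}
```

The job would call `cleanup` with its group name right after shutting the consumer down, so the next run registers against an empty ids list.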
However, I think a better approach would be to remove this node during ZookeeperConsumerConnector.shutdown().
If I missed something in your sources, please let me know.