Seemingly after bdf2446ccce592f3c000290f11de88520327aa19, the controller may stop watching the /admin/reassign_partitions node in ZooKeeper and consequently stop accepting partition reassignment commands via ZooKeeper.
I'm not 100% sure that bdf2446ccce592f3c000290f11de88520327aa19 is the cause, but the issue doesn't reproduce on 3fe6b5e951db8f7184a4098f8ad8a1afb2b2c1a0, the commit right before it.
It also reproduces on the trunk HEAD a87decb9e4df5bfa092c26ae4346f65c426f1321.
1. Run ZooKeeper and two Kafka brokers.
2. Create a topic with 100 partitions and place them on Broker 0:
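The exact command isn't shown in the report; a plausible way to do this with the stock tooling (the topic name test and the bootstrap address are assumptions) is kafka-topics.sh with an explicit replica assignment that pins every partition to broker 0:

```shell
# Build a replica-assignment string of 100 entries, all "0" (broker 0).
# Topic name "test" and the bootstrap address are assumptions.
ASSIGNMENT=$(seq 100 | sed 's/.*/0/' | paste -sd, -)
if [ -x bin/kafka-topics.sh ]; then   # run from the Kafka distribution root
  bin/kafka-topics.sh --bootstrap-server localhost:9092 \
    --create --topic test \
    --replica-assignment "$ASSIGNMENT"
fi
```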
3. Add some data:
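The producer command isn't shown either; something like the bundled perf-test tool would do (the record count and size are arbitrary, they only need to put some data on broker 0):

```shell
# Assumes topic "test" and a broker on localhost:9092; parameters are arbitrary.
NUM_RECORDS=100000
RECORD_SIZE=100   # bytes per record, ~10 MB total
if [ -x bin/kafka-producer-perf-test.sh ]; then   # run from the Kafka distribution root
  bin/kafka-producer-perf-test.sh --topic test \
    --num-records "$NUM_RECORDS" --record-size "$RECORD_SIZE" --throughput -1 \
    --producer-props bootstrap.servers=localhost:9092
fi
```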
4. Create the partition reassignment node /admin/reassign_partitions in ZooKeeper and, shortly after that, update the data in the node (even the same value will do). I made a simple Python script for this:
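The script itself isn't attached; a minimal sketch of what it could look like, assuming the kazoo client library, ZooKeeper on localhost:2181, and the topic from the steps above (moving the partitions to broker 1):

```python
# Sketch of the repro script (assumptions: kazoo is installed, ZooKeeper on
# localhost:2181, topic "test" with 100 partitions, target broker 1).
import json


def reassignment_payload(topic, num_partitions, broker):
    # Build the /admin/reassign_partitions JSON: move every partition to `broker`.
    return json.dumps({
        "version": 1,
        "partitions": [
            {"topic": topic, "partition": p, "replicas": [broker]}
            for p in range(num_partitions)
        ],
    }).encode("utf-8")


def trigger(hosts="localhost:2181"):
    from kazoo.client import KazooClient  # external dependency

    zk = KazooClient(hosts=hosts)
    zk.start()
    path = "/admin/reassign_partitions"
    data = reassignment_payload("test", 100, 1)
    zk.create(path, data)  # create the node...
    zk.set(path, data)     # ...and update it shortly after (same value is fine)
    zk.stop()


if __name__ == "__main__":
    trigger()
```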
5. Observe that the controller doesn't react to further updates to /admin/reassign_partitions and doesn't delete the node.
Also, it can be confirmed with ZooKeeper's four-letter watch commands that there is no watch on the node (for this, you should run ZooKeeper with 4lw.commands.whitelist=*).
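For instance, the wchp four-letter command lists watches grouped by path, so its output can be checked for the node (host/port are assumptions; nc is used here, but telnet works too):

```shell
# Query ZooKeeper's watch table; requires 4lw.commands.whitelist=* (or at
# least wchp) in the ZooKeeper config. Host/port are assumptions.
ZK_HOST=localhost
ZK_PORT=2181
if nc -z "$ZK_HOST" "$ZK_PORT" 2>/dev/null; then
  echo wchp | nc "$ZK_HOST" "$ZK_PORT" | grep -q /admin/reassign_partitions \
    && echo "watch present" || echo "no watch on /admin/reassign_partitions"
fi
```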
Since the bug is timing-dependent, it might not reproduce on the first attempt, so you might need to repeat step 4 a couple of times. However, the reproducibility rate is pretty high.
The data in the topic and the large number of partitions are not needed per se; they only make the timing more favourable.
Controller re-election will resolve the issue, but a new controller can be put into the same state the same way.
TBD, suggestions are welcome.