I deployed a ZooKeeper cluster with three nodes: ofs_zk1, ofs_zk2, and ofs_zk3.
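For reference, a minimal quorum configuration of this shape would look like the sketch below (illustrative only; the ports and paths are the usual ZooKeeper defaults, not necessarily the values of this deployment):

```
# zoo.cfg (illustrative)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=ofs_zk1:2888:3888
server.2=ofs_zk2:2888:3888
server.3=ofs_zk3:2888:3888
```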
I shut down the network interfaces of ofs_zk2 using the command "ifdown eth0 eth1".
A new leader is supposed to be elected within a few seconds, but in fact ofs_zk1 and ofs_zk3 just keep holding elections again and again, and neither of them ever becomes the new leader.
I changed the log level to DEBUG (the default is INFO) and restarted the ZooKeeper servers on ofs_zk1 and ofs_zk2, but that did not fix the problem.
I read the logs and the ZooKeeper source code, and I think I have found the cause.
When the potential leader (say ofs_zk3) begins the election (FastLeaderElection.lookForLeader()), it sends notifications to all the servers. If it receives no notification within a timeout, it resends the notifications and doubles the timeout; this repeats until a notification is received or the timeout reaches its maximum value, as sketched below.
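A simplified paraphrase of that loop, from my reading of the 3.4-line source (the names are real, but most details are elided):

```java
// FastLeaderElection.lookForLeader(), simplified: poll the receive
// queue with a timeout; on timeout, resend notifications (or retry
// the connections) and double the timeout, capped at
// maxNotificationInterval (60 s by default).
int notTimeout = finalizeWait;  // starts at 200 ms
while (self.getPeerState() == ServerState.LOOKING) {
    Notification n = recvqueue.poll(notTimeout, TimeUnit.MILLISECONDS);
    if (n == null) {
        if (manager.haveDelivered()) {
            sendNotifications();   // re-enqueue a vote for every server
        } else {
            manager.connectAll();  // retry the election connections
        }
        // Exponential backoff.
        notTimeout = Math.min(notTimeout * 2, maxNotificationInterval);
    } else {
        // ... process the received vote ...
    }
}
```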
FastLeaderElection.sendNotifications() just puts the notification messages into a queue and returns; the single WorkerSender thread is responsible for actually sending them. The WorkerSender processes the notifications one by one by passing each of them to QuorumCnxManager. Here comes the problem: QuorumCnxManager.toSend() blocks for a long time when the notification is addressed to ofs_zk2 (whose network is down), so the notifications addressed to ofs_zk1 that are queued behind it are delayed for just as long. The repeated notifications from FastLeaderElection.sendNotifications() only make things worse.
Here is the relevant source code:
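(A simplified paraphrase from my reading of the code, not a verbatim copy; the class, method, and field names are real, but error handling and details are elided.)

```java
// FastLeaderElection.Messenger.WorkerSender, simplified: one thread
// drains one queue, so one slow destination stalls everything behind it.
class WorkerSender implements Runnable {
    public void run() {
        while (!stop) {
            try {
                ToSend m = sendqueue.poll(3000, TimeUnit.MILLISECONDS);
                if (m != null) {
                    process(m);  // ends up in QuorumCnxManager.toSend()
                }
            } catch (InterruptedException e) {
                break;
            }
        }
    }
}

// QuorumCnxManager.toSend(), simplified: enqueue the message, then
// synchronously try to open a connection to the destination server.
public void toSend(Long sid, ByteBuffer b) {
    if (self.getId() == sid) {
        // A message to myself is short-circuited to the receive queue.
        b.position(0);
        addToRecvQueue(new Message(b.duplicate(), sid));
    } else {
        ArrayBlockingQueue<ByteBuffer> bq =
                new ArrayBlockingQueue<ByteBuffer>(SEND_CAPACITY);
        ArrayBlockingQueue<ByteBuffer> oldq = queueSendMap.putIfAbsent(sid, bq);
        addToSendQueue(oldq != null ? oldq : bq, b);
        connectOne(sid);  // <-- the blocking call
    }
}

// QuorumCnxManager.connectOne(), simplified: a plain blocking
// Socket.connect() with timeout cnxTO (5000 ms by default). While this
// waits on the unreachable ofs_zk2, the WorkerSender thread is stuck,
// and the notification for ofs_zk1 sits unsent in sendqueue.
synchronized void connectOne(long sid) {
    if (senderWorkerMap.get(sid) == null) {
        try {
            Socket sock = new Socket();
            sock.setTcpNoDelay(true);
            sock.connect(self.getVotingView().get(sid).electionAddr, cnxTO);
            initiateConnection(sock, sid);
        } catch (IOException e) {
            LOG.warn("Cannot open channel to " + sid, e);
        }
    }
}
```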
Therefore, when ofs_zk3 believes it is the leader, it begins to wait for the epoch ACK; but ofs_zk1 never receives the notification saying that the leader is ofs_zk3, because ofs_zk3 has not actually sent it (it may still be sitting in the sendqueue of the WorkerSender). In the end, the potential leader ofs_zk3 fails to receive the epoch ACK within the timeout, so it gives up leadership and begins a new election. The wait that times out is sketched below.
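For completeness, a simplified paraphrase of that wait (in Leader.getEpochToPropose(); the bound is initLimit * tickTime):

```java
// Leader.getEpochToPropose(), simplified: the prospective leader waits
// until a quorum has reported its accepted epoch. The wait is bounded
// by initLimit * tickTime; if no quorum forms in time, an exception is
// thrown and the server abandons leadership.
public long getEpochToPropose(long sid, long lastAcceptedEpoch)
        throws InterruptedException, IOException {
    synchronized (connectingFollowers) {
        if (!waitingForNewEpoch) {
            return epoch;
        }
        if (lastAcceptedEpoch >= epoch) {
            epoch = lastAcceptedEpoch + 1;
        }
        connectingFollowers.add(sid);
        QuorumVerifier verifier = self.getQuorumVerifier();
        if (verifier.containsQuorum(connectingFollowers)) {
            waitingForNewEpoch = false;
            self.setAcceptedEpoch(epoch);
            connectingFollowers.notifyAll();
        } else {
            long start = System.currentTimeMillis();
            long cur = start;
            long end = start + self.getInitLimit() * self.getTickTime();
            while (waitingForNewEpoch && cur < end) {
                connectingFollowers.wait(end - cur);
                cur = System.currentTimeMillis();
            }
            if (waitingForNewEpoch) {
                // This is what I observe on ofs_zk3: ofs_zk1's vote never
                // arrived, no quorum forms, and leadership is abandoned.
                throw new InterruptedException(
                        "Timeout while waiting for epoch from quorum");
            }
        }
        return epoch;
    }
}
```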
The log files of ofs_zk1 and ofs_zk3 are attached.