We use the management REST API to trigger a rebalance immediately before stopping a server, to limit the possibility of data loss. In a cluster with 3 locators, 3 servers, and no regions, we have observed that the rebalance operation sometimes never completes if one of the locators is restarting concurrently with it.
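For context, starting a rebalance through the management REST API is a POST to the rebalance operations endpoint, whose response reports the operation's status. A minimal Python sketch, assuming the `/management/v1/operations/rebalances` endpoint and a hypothetical locator HTTP address (the hostname and port here are illustrative, not from this cluster):

```python
import json
import urllib.request

BASE = "http://locator-0:7070"  # hypothetical locator HTTP endpoint


def rebalance_uri(base_url):
    """Build the URI of the rebalance operations endpoint."""
    return base_url.rstrip("/") + "/management/v1/operations/rebalances"


def start_rebalance(base_url):
    """POST an empty rebalance request and return the parsed JSON response.

    The response is expected to carry a status field such as "IN_PROGRESS",
    which the caller then polls until a terminal state is reached.
    """
    req = urllib.request.Request(
        rebalance_uri(base_url),
        data=b"{}",
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This is only a sketch of the call shape; our automation performs the equivalent request before each server stop.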
More specifically, the scenario in which we see this issue is an automated "rolling restart" operation in a Kubernetes environment, which proceeds as follows:
- At most one locator and one server are restarting at any point in time
- Each locator/server waits until the previous locator/server is fully online before restarting
- Immediately before stopping a server, a rebalance operation is performed, and the server is not stopped until that rebalance has completed
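The last step above is where the hang manifests: the automation polls the operation's status and only proceeds to stop the server once a terminal state is reported. A minimal sketch of that wait loop, with a deadline added so an operation stuck in "IN_PROGRESS" surfaces as a detectable failure rather than blocking the rollout forever (`get_status` is a hypothetical stand-in for the REST status call, not an API from Geode itself):

```python
import time


def wait_for_rebalance(get_status, timeout_s=600.0, poll_s=5.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until the rebalance leaves "IN_PROGRESS".

    get_status: callable returning the operation's status string,
    e.g. "IN_PROGRESS", "COMPLETED", or "ERROR".
    Raises TimeoutError if no terminal state is reached within timeout_s,
    which is exactly the failure mode described in this report.
    """
    deadline = clock() + timeout_s
    while True:
        status = get_status()
        if status != "IN_PROGRESS":
            return status  # terminal state: safe to proceed with the stop
        if clock() >= deadline:
            raise TimeoutError(
                "rebalance still IN_PROGRESS after %.0fs" % timeout_s)
        sleep(poll_s)
```

In the rolling restart, the server is stopped only after this wait returns successfully; without a timeout, the stuck operation silently stalls the whole rollout.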
The impact of this issue is that the "rolling restart" operation never completes, because it cannot proceed with stopping a server until the rebalance operation has completed. A human must then intervene to manually trigger a rebalance and stop the server. This type of "rolling restart" is triggered fairly often in Kubernetes: any time part of the configuration of the locators or servers changes.
The following JSON is a sample response from the management REST API showing the rebalance operation stuck in "IN_PROGRESS":