Steps to reproduce:
- Create a Geode cluster with 1 locator and 2 servers.
- Create a region of type PARTITION_REDUNDANT.
- Put an entry into the region.
- Trigger a restore redundancy operation via the management REST API or gfsh.
- The result from the restore redundancy operation reports that the actual redundancy for the region is -1. The expected redundancy at this point is 1, because two cache servers are available to host the redundant copy.
- Stop one of the servers.
- Trigger another restore redundancy operation via the management REST API or gfsh.
- The result from the second restore redundancy operation again reports that the actual redundancy for the region is -1. At this point the region should be counted as having zero redundant copies, because only one cache server remains.
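For reference, the steps above can be sketched as a gfsh session roughly like the following (member and region names are illustrative, and the exact option set may vary by Geode version):

```shell
gfsh> start locator --name=locator1
gfsh> start server --name=server1
gfsh> start server --name=server2
gfsh> create region --name=testRegion --type=PARTITION_REDUNDANT
gfsh> put --region=/testRegion --key=key1 --value=value1
gfsh> restore redundancy                 # reports actual redundancy -1; expected 1
gfsh> stop server --name=server2
gfsh> restore redundancy                 # reports actual redundancy -1; expected 0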
I encountered this issue while using the management REST API, but the same issue occurs with the gfsh command. I assume that fixing the gfsh command will also fix the management REST API; if not, I can split this into two separate JIRAs.
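For the REST path, a restore redundancy operation can be triggered against the locator's management endpoint with something like the call below (the host, port, and endpoint path are from my setup and memory, and may differ by Geode version; the operation is asynchronous, so the returned operation URI has to be polled for the result):

```shell
# Submit the restore redundancy operation to the cluster management service;
# an empty JSON body requests redundancy restoration for all eligible regions.
curl -s -X POST http://localhost:7070/management/v1/operations/restoreRedundancy \
  -H 'Content-Type: application/json' \
  -d '{}'
```

The JSON result returned once the operation completes is where the incorrect actual redundancy value of -1 appears.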