This was reproduced on a local cluster (non-HDFS); the steps are as follows:
- Start a single-node cluster and start an additional region server using local-regionservers.sh
- Through the hbase shell, add a new rsgroup
- Move one of the region servers to the new rsgroup
- Stop the region server that is left in the default rsgroup
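The steps above can be sketched roughly as below, assuming a local-mode HBase install with the rsgroup coprocessor enabled; the group name and the server host:port are illustrative placeholders, not values from the original report:

```shell
# Start a single-node cluster, then an extra local region server
bin/start-hbase.sh
bin/local-regionservers.sh start 2     # port offset 2; actual port assignment depends on the script's defaults

# Create a new rsgroup and move one region server into it
bin/hbase shell <<'EOF'
add_rsgroup 'my_group'
move_servers_rsgroup 'my_group', ['localhost:16202']
EOF

# Stop the region server that remains in the 'default' rsgroup
bin/local-regionservers.sh stop 1
```

After the last step, the master should still be able to reassign hbase:meta, but per this report it does not.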
The cluster becomes unusable, even if the region server is restarted or all the services are brought down and back up.
In 1.1.x, the cluster recovers fine. It looks like meta is assigned to a dummy region server, and once the region server is restarted, meta gets assigned to it. The following is what we see in the master UI while the region server is down: