diff --git src/main/docbkx/ops_mgt.xml src/main/docbkx/ops_mgt.xml
index 6246fad..e403ac3 100644
--- src/main/docbkx/ops_mgt.xml
+++ src/main/docbkx/ops_mgt.xml
@@ -717,48 +717,113 @@ false
Rolling Restart
- You can also ask this script to restart a RegionServer after the shutdown AND move its
- old regions back into place. The latter you might do to retain data locality. A primitive
- rolling restart might be effected by running something like the following:
- $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
- Tail the output of /tmp/log.txt to follow the scripts progress.
- The above does RegionServers only. The script will also disable the load balancer before
- moving the regions. You'd need to do the master update separately. Do it before you run the
- above script. Here is a pseudo-script for how you might craft a rolling restart script:
-
-
- Untar your release, make sure of its configuration and then rsync it across the
- cluster. If this is 0.90.2, patch it with HBASE-3744 and HBASE-3756.
-
-
- Run hbck to ensure the cluster consistent
- $ ./bin/hbase hbck Effect repairs if inconsistent.
-
-
-
- Restart the Master:
- $ ./bin/hbase-daemon.sh stop master; ./bin/hbase-daemon.sh start master
-
-
-
- Run the graceful_stop.sh script per RegionServer. For
- example:
- $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
+ Some cluster configuration changes require either the entire cluster, or the
+ RegionServers, to be restarted in order to pick up the changes. In addition, rolling
+ restarts are supported for upgrading to a minor or maintenance release, and, where
+ possible, to a major release. See the release notes for the release you want to upgrade
+ to, to find out about any limitations on performing a rolling upgrade.
+ HBase ships with a script, bin/rolling-restart.sh, that allows you
+ to perform rolling restarts on the entire cluster, the Master only, or the RegionServers
+ only.
+
+ rolling-restart.sh General Usage
+
+$ ./bin/rolling-restart.sh --help
+Usage: rolling-restart.sh [--config <hbase-confdir>] [--rs-only] [--master-only] [--graceful] [--maxthreads xx]
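+ For example, to perform a rolling restart of the entire cluster with the default
+ options (a minimal sketch, assuming the current directory is the HBase installation
+ directory):
+$ ./bin/rolling-restart.sh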
+
+
+
+ Rolling Restart on RegionServers Only
+
+ To perform a rolling restart on the RegionServers only, use the
+ --rs-only option. This might be necessary if you need to reboot the
+ individual RegionServer or if you make a configuration change that only affects
+ RegionServers and not the other HBase processes.
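+ For example (run from the HBase installation directory, as above):
+$ ./bin/rolling-restart.sh --rs-only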
+
+
+
+ Rolling Restart on Masters Only
+
+ To perform a rolling restart on the active and backup Masters, use the
+ --master-only option. You might use this if you know that your
+ configuration change only affects the Master and not the RegionServers, or if you need
+ to restart the server where the active Master is running.
+ If you are not running backup Masters, the Master is simply restarted. If you are
+ running backup Masters, they are all stopped before any are restarted, to avoid a race
+ condition in ZooKeeper over which becomes the new Master. First the main Master is
+ restarted, then the backup Masters are restarted. Directly after restart, the new
+ active Master checks for and cleans out any regions in transition before taking on its
+ normal workload.
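+ For example:
+$ ./bin/rolling-restart.sh --master-only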
+
+
+
+ Graceful Restart
+
+ If you specify the --graceful option, RegionServers are restarted
+ using the bin/graceful_stop.sh script, which moves regions off a
+ RegionServer before restarting it. This is safer, but can delay the restart.
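+ For example:
+$ ./bin/rolling-restart.sh --graceful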
+
+
+
+ Limiting the Number of Threads
+
+ To limit the number of threads used by the rolling restart, use the
+ --maxthreads option.
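+ For example, to cap the restart at two threads (the value replaces the
+ xx placeholder in the usage above; a sketch, not the only valid
+ combination of options):
+$ ./bin/rolling-restart.sh --graceful --maxthreads 2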
+
+
+
+
+ Rolling Restart - Legacy
+
+ This section is preserved for legacy reasons. This was the advice given for rolling
+ restarts before the bin/rolling-restart.sh script was
+ documented.
+
+ You can also ask the graceful_stop.sh script to restart a
+ RegionServer after the shutdown AND move its old regions back into place. The latter
+ you might do to retain data locality. A primitive rolling restart might be effected by
+ running something like the following:
+ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
+ Tail the output of /tmp/log.txt to follow the script's progress.
+ The above restarts RegionServers only. The script also disables the load balancer
+ before moving the regions. You'd need to do the Master update separately, and do it
+ before you run the above script. Here is a pseudo-script for how you might craft a
+ rolling restart script:
+
+
+ Untar your release, verify its configuration, and then rsync it across the
+ cluster. If this is 0.90.2, patch it with HBASE-3744 and HBASE-3756.
+
+
+ Run hbck to ensure the cluster is consistent:
+ $ ./bin/hbase hbck
+ Effect repairs if inconsistent.
+
+
+
+ Restart the Master:
+ $ ./bin/hbase-daemon.sh stop master; ./bin/hbase-daemon.sh start master
+
+
+
+ Run the graceful_stop.sh script per RegionServer. For
+ example:
+ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
- If you are running thrift or rest servers on the RegionServer, pass --thrift or
- --rest options (See usage for graceful_stop.sh script).
-
-
- Restart the Master again. This will clear out dead servers list and reenable the
- balancer.
-
-
- Run hbck to ensure the cluster is consistent.
-
-
- It is important to drain HBase regions slowly when restarting regionservers. Otherwise,
- multiple regions go offline simultaneously as they are re-assigned to other nodes. Depending
- on your usage patterns, this might not be desirable.
+ If you are running Thrift or REST servers on the RegionServer, pass the --thrift
+ or --rest options (see the usage output of the
+ graceful_stop.sh script); a sketch follows this list.
+
+
+ Restart the Master again. This will clear out the dead servers list and re-enable
+ the balancer.
+
+
+ Run hbck to ensure the cluster is consistent.
+
+
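+ As a sketch, here is the per-RegionServer loop from the step above, extended with
+ the --thrift and --rest options (verify these flags in your
+ version's graceful_stop.sh usage first):
+ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --thrift --rest --debug $i; done &> /tmp/log.txt &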
+ It is important to drain HBase regions slowly when restarting RegionServers.
+ Otherwise, multiple regions go offline simultaneously as they are re-assigned to other
+ nodes. Depending on your usage patterns, this might not be desirable.
+