Index: src/docbkx/book.xml
===================================================================
--- src/docbkx/book.xml (revision 1097898)
+++ src/docbkx/book.xml (working copy)
@@ -231,7 +231,7 @@
- Region Server Metrics
+ RegionServer Metrics
hbase.regionserver.blockCacheCount
Block cache item count in memory. This is the number of blocks of storefiles (HFiles) in the cache.
@@ -266,22 +266,22 @@
TODO
hbase.regionserver.memstoreSizeMB
- Sum of all the memstore sizes in this regionserver (MB)
+ Sum of all the memstore sizes in this RegionServer (MB)
hbase.regionserver.regions
- Number of regions served by the regionserver
+ Number of regions served by the RegionServer
hbase.regionserver.requests
- Total number of read and write requests. Requests correspond to regionserver RPC calls, thus a single Get will result in 1 request, but a Scan with caching set to 1000 will result in 1 request for each 'next' call (i.e., not each row). A bulk-load request will constitute 1 request per HFile.
+ Total number of read and write requests. Requests correspond to RegionServer RPC calls, thus a single Get will result in 1 request, but a Scan with caching set to 1000 will result in 1 request for each 'next' call (i.e., not each row). A bulk-load request will constitute 1 request per HFile.
hbase.regionserver.storeFileIndexSizeMB
- Sum of all the storefile index sizes in this regionserver (MB)
+ Sum of all the storefile index sizes in this RegionServer (MB)
hbase.regionserver.stores
- Number of stores open on the regionserver. A store corresponds to a column family. For example, if a table (which contains the column family) has 3 regions on a regionserver, there will be 3 stores open for that column family.
+ Number of stores open on the RegionServer. A store corresponds to a column family. For example, if a table (which contains the column family) has 3 regions on a RegionServer, there will be 3 stores open for that column family.
hbase.regionserver.storeFiles
- Number of store filles open on the regionserver. A store may have more than one storefile (HFile).
+ Number of store files open on the RegionServer. A store may have more than one storefile (HFile).
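These counters can also be read programmatically over JMX using the standard javax.management client classes. A minimal sketch; the host, port 10102, attribute name, and the hadoop:service=RegionServer,name=RegionServerStatistics ObjectName are assumptions to verify against your deployment's JMX setup (jconsole will list the actual beans):
// connect to a RegionServer's JMX port (host and port are assumptions)
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://rs-host:10102/jmxrmi");
JMXConnector jmxc = JMXConnectorFactory.connect(url);
MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
// the bean name below is an assumption; confirm it against your running RegionServer
ObjectName rs = new ObjectName("hadoop:service=RegionServer,name=RegionServerStatistics");
System.out.println("blockCacheCount=" + mbsc.getAttribute(rs, "blockCacheCount"));
jmxc.close();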
@@ -712,7 +712,7 @@
HTable
instances are not thread-safe. When creating HTable instances, it is advisable to use the same HBaseConfiguration
-instance. This will ensure sharing of zookeeper and socket instances to the region servers
+instance. This will ensure sharing of ZooKeeper and socket instances to the RegionServers
which is usually what you want. For example, this is preferred:
HBaseConfiguration conf = HBaseConfiguration.create();
HTable table1 = new HTable(conf, "myTable");
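A second table created against the same conf shares the ZooKeeper and socket instances mentioned above:
HTable table2 = new HTable(conf, "myTable2");
as opposed to forcing separate connections with a fresh configuration per instance:
HBaseConfiguration conf1 = HBaseConfiguration.create();
HTable table1 = new HTable(conf1, "myTable");
HBaseConfiguration conf2 = HBaseConfiguration.create();
HTable table2 = new HTable(conf2, "myTable2");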
@@ -729,7 +729,7 @@
WriteBuffer and Batch Methods
If autoflush is turned off on
HTable,
- Puts are sent to region servers when the writebuffer
+ Puts are sent to RegionServers when the writebuffer
is filled. The writebuffer is 2MB by default. Before an HTable instance is
discarded, either close() or
flushCommits() should be invoked so Puts
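A minimal sketch of this write path, assuming the 0.90-era client API (the table, family, and qualifier names are illustrative):
HTable table = new HTable(conf, "myTable");
table.setAutoFlush(false); // buffer Puts client-side rather than sending each one
Put p = new Put(Bytes.toBytes("row1"));
p.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
table.put(p); // queued in the 2MB writebuffer, not yet on a RegionServer
table.flushCommits(); // push buffered Puts to the RegionServers
table.close(); // also flushes any remaining buffered Puts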
@@ -742,7 +742,7 @@
Filters
Get and Scan instances can be
- optionally configured with filters which are applied on the region server.
+ optionally configured with filters which are applied on the RegionServer.
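For example, a minimal sketch of a server-side filter on a Scan; the SingleColumnValueFilter and the family/qualifier/value bytes here are illustrative choices, not the only option:
Scan scan = new Scan();
scan.setFilter(new SingleColumnValueFilter(Bytes.toBytes("cf"),
    Bytes.toBytes("qual"), CompareFilter.CompareOp.EQUAL, Bytes.toBytes("value")));
ResultScanner scanner = table.getScanner(scan);
for (Result r : scanner) {
  // only rows passing the filter are shipped back to the client
}
scanner.close();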
@@ -796,7 +796,7 @@
There is not much memory footprint difference between 1 region
- and 10 in terms of indexes, etc, held by the regionserver.
+ and 10 in terms of indexes, etc., held by the RegionServer.
@@ -1118,17 +1118,17 @@
See .
Node Decommission
- You can stop an individual regionserver by running the following
+ You can stop an individual RegionServer by running the following
script in the HBase directory on the particular node:
$ ./bin/hbase-daemon.sh stop regionserver
- The regionserver will first close all regions and then shut itself down.
- On shutdown, the regionserver's ephemeral node in ZooKeeper will expire.
- The master will notice the regionserver gone and will treat it as
- a 'crashed' server; it will reassign the nodes the regionserver was carrying.
+ The RegionServer will first close all regions and then shut itself down.
+ On shutdown, the RegionServer's ephemeral node in ZooKeeper will expire.
+ The master will notice the RegionServer gone and will treat it as
+ a 'crashed' server; it will reassign the regions the RegionServer was carrying.
Disable the Load Balancer before Decommissioning a node
If the load balancer runs while a node is shutting down, then
there could be contention between the Load Balancer and the
- Master's recovery of the just decommissioned regionserver.
+ Master's recovery of the just decommissioned RegionServer.
Avoid any problems by disabling the balancer first.
See below.
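The balancer can be toggled from the HBase shell; a sketch, assuming the balance_switch command available in this release (it returns the previous balancer state):
$ echo "balance_switch false" | ./bin/hbase shell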
@@ -1135,10 +1135,10 @@
- A downside to the above stop of a regionserver is that regions could be offline for
+ A downside to the above stop of a RegionServer is that regions could be offline for
a good period of time. Regions are closed in order. If there are many regions on the server, the
first region to close may not be back online until all regions close and after the master
- notices the regionserver's znode gone. In HBase 0.90.2, we added facility for having
+ notices the RegionServer's znode gone. In HBase 0.90.2, we added a facility for having
a node gradually shed its load and then shut itself down. HBase 0.90.2 added the
graceful_stop.sh script. Here is its usage:
$ ./bin/graceful_stop.sh
@@ -1151,7 +1151,7 @@
hostname Hostname of server we are to stop
- To decommission a loaded regionserver, run the following:
+ To decommission a loaded RegionServer, run the following:
$ ./bin/graceful_stop.sh HOSTNAME
where HOSTNAME is the host carrying the RegionServer
you would decommission.
@@ -1157,8 +1157,8 @@
you would decommission.
On HOSTNAME
The HOSTNAME passed to graceful_stop.sh
- must match the hostname that hbase is using to identify regionservers.
- Check the list of regionservers in the master UI for how HBase is
+ must match the hostname that HBase is using to identify RegionServers.
+ Check the list of RegionServers in the master UI for how HBase is
referring to servers. It's usually a hostname but can also be an FQDN.
Whatever HBase is using, this is what you should pass the
graceful_stop.sh decommission
@@ -1167,7 +1167,7 @@
currently running; the graceful unloading of regions will not run.
The graceful_stop.sh script will move the regions off the
- decommissioned regionserver one at a time to minimize region churn.
+ decommissioned RegionServer one at a time to minimize region churn.
It will verify that the region has deployed in the new location before it
moves the next region, and so on, until the decommissioned server
is carrying zero regions. At this point, the graceful_stop.sh
@@ -1201,7 +1201,7 @@
$ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
Tail the output of /tmp/log.txt to follow the script's
- progress. The above does regionservers only. Be sure to disable the
+ progress. The above does RegionServers only. Be sure to disable the
load balancer before doing the above. You'd need to do the master
update separately. Do it before you run the above script.
Here is a pseudo-script for how you might craft a rolling restart script:
@@ -1227,10 +1227,10 @@
- Run the graceful_stop.sh script per regionserver. For example:
+ Run the graceful_stop.sh script per RegionServer. For example:
$ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
- If you are running thrift or rest servers on the regionserver, pass --thrift or --rest options (See usage
+ If you are running Thrift or REST servers on the RegionServer, pass --thrift or --rest options (see the usage
for the graceful_stop.sh script).
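Putting the above together, a minimal sketch of a full rolling restart of the RegionServers; the balance_switch steps and log path are illustrative and should be adapted to your deployment. The loop runs in the foreground here so the balancer is only re-enabled after the last RegionServer has restarted:
$ echo "balance_switch false" | ./bin/hbase shell
$ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt
$ echo "balance_switch true" | ./bin/hbase shell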