diff --git src/main/docbkx/book.xml src/main/docbkx/book.xml index 023bd2b..b55c370 100644 --- src/main/docbkx/book.xml +++ src/main/docbkx/book.xml @@ -34,7 +34,7 @@ - + @@ -104,8 +104,7 @@ column contents:html is made up of the column family contents and html qualifier. - The colon character (:) delimits the column family from the + The colon character (:) delimits the column family from the column family qualifier. @@ -189,53 +188,58 @@ A namespace is a logical grouping of tables analogous to a database in relational database systems. This abstraction lays the groundwork for upcoming multi-tenancy related features: - Quota Management (HBASE-8410) - Restrict the amount of resources (ie regions, tables) a namespace can consume. - Namespace Security Administration (HBASE-9206) - provide another level of security administration for tenants. - Region server groups (HBASE-6721) - A namespace/table can be pinned onto a subset of regionservers thus guaranteeing a course level of isolation. + Quota Management (HBASE-8410) - Restrict the amount of resources (i.e., regions, tables) a namespace can consume. + Namespace Security Administration (HBASE-9206) - Provide another level of security administration for tenants. + Region server groups (HBASE-6721) - A namespace/table can be pinned onto a subset of regionservers, thus guaranteeing a coarse level of isolation.
Namespace management A namespace can be created, removed or altered. Namespace membership is determined during - table creation by specifying a fully-qualified table name of the form: - - <table namespace>:<table qualifier> - - - Examples: - - + table creation by specifying a fully-qualified table name of the form: + + <table namespace>:<table qualifier> + + + + Examples + + #Create a namespace create_namespace 'my_ns' - + + #create my_table in my_ns namespace create 'my_ns:my_table', 'fam' - + + #drop namespace drop_namespace 'my_ns' - + + #alter namespace alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'} - - + +
Predefined namespaces There are two predefined special namespaces: - hbase - system namespace, used to contain hbase internal tables - default - tables with no explicit specified namespace will automatically - fall into this namespace. + hbase - system namespace, used to contain HBase internal tables + default - tables with no explicitly specified namespace will automatically + fall into this namespace. - - Examples: + + Examples + #namespace=foo and table qualifier=bar create 'foo:bar', 'fam' @@ -243,7 +247,7 @@ create 'foo:bar', 'fam' #namespace=default and table qualifier=bar create 'bar', 'fam' - +
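The same operations are available programmatically. The following is a minimal sketch using the Java client, assuming a 0.96+ client where HBaseAdmin exposes the namespace methods; the namespace, table and family names simply mirror the shell examples above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
try {
  // equivalent of: create_namespace 'my_ns'
  admin.createNamespace(NamespaceDescriptor.create("my_ns").build());

  // equivalent of: create 'my_ns:my_table', 'fam'
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("my_ns:my_table"));
  htd.addFamily(new HColumnDescriptor("fam"));
  admin.createTable(htd);
} finally {
  admin.close();
}

Tables created without a namespace prefix (such as 'bar' above) land in the default namespace.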
@@ -271,8 +275,8 @@ create 'bar', 'fam' courses:math are both members of the courses column family. The colon character (:) delimits the column family from the - column family qualifierColumn Family Qualifier. + >:) delimits the column family from the + column family qualifierColumn Family Qualifier. The column family prefix must be composed of printable characters. The qualifying tail, the column family qualifier, can be made of any @@ -651,9 +655,9 @@ htable.put(put);
ACID -
See ACID Semantics.
+        See ACID Semantics.
             Lars Hofhansl has also written a note on
-            ACID in HBase.
+ ACID in HBase.
@@ -794,7 +798,7 @@ public static class MyMapper extends TableMapper<Text, Text> {
HBase MapReduce Read/Write Example The following is an example of using HBase both as a source and as a sink with MapReduce. - This example will simply copy data from one table to another. + This example will simply copy data from one table to another. Configuration config = HBaseConfiguration.create(); Job job = new Job(config,"ExampleReadWrite"); @@ -823,11 +827,11 @@ if (!b) { throw new IOException("error with job!"); } - An explanation is required of what TableMapReduceUtil is doing, especially with the reducer. + An explanation is required of what TableMapReduceUtil is doing, especially with the reducer. TableOutputFormat is being used as the outputFormat class, and several parameters are being set on the config (e.g., TableOutputFormat.OUTPUT_TABLE), as well as setting the reducer output key to ImmutableBytesWritable and reducer value to Writable. - These could be set by the programmer on the job and conf, but TableMapReduceUtil tries to make things easier. + These could be set by the programmer on the job and conf, but TableMapReduceUtil tries to make things easier. The following is the example mapper, which will create a Put and matching the input Result and emit it. Note: this is what the CopyTable utility does. @@ -854,7 +858,6 @@ public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put& This is just an example, developers could choose not to use TableOutputFormat and connect to the target table themselves. -
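As noted above, TableOutputFormat is not mandatory: a task can open its own connection to the target table and write the Puts directly. The following is a hedged sketch of that approach; the target table name is illustrative, and the buffering choices are up to the application.

import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;

public static class DirectWriteMapper extends TableMapper<ImmutableBytesWritable, Put> {

  private HTable targetTable;

  @Override
  protected void setup(Context context) throws IOException {
    // open the target table once per task instead of relying on TableOutputFormat
    targetTable = new HTable(context.getConfiguration(), "targetTable");  // illustrative name
    targetTable.setAutoFlush(false);  // buffer puts client-side
  }

  @Override
  public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException {
    Put put = new Put(row.get());
    for (KeyValue kv : value.raw()) {
      put.add(kv);          // copy every cell of the source row as-is
    }
    targetTable.put(put);   // written by this task; nothing is emitted to the framework
  }

  @Override
  protected void cleanup(Context context) throws IOException {
    targetTable.flushCommits();  // push any buffered puts before the task exits
    targetTable.close();
  }
}

Such a job would still use TableMapReduceUtil.initTableMapperJob for the source scan, but with zero reduces and a NullOutputFormat, since nothing flows through the normal output path.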
HBase MapReduce Read/Write Example With Multi-Table Output @@ -962,8 +965,8 @@ if (!b) { throw new IOException("error with job!"); } - As stated above, the previous Mapper can run unchanged with this example. - As for the Reducer, it is a "generic" Reducer instead of extending TableMapper and emitting Puts. + As stated above, the previous Mapper can run unchanged with this example. + As for the Reducer, it is a "generic" Reducer instead of extending TableMapper and emitting Puts. public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> { @@ -1082,18 +1085,18 @@ if (!b) { RDBMS can scale well, but only up to a point - specifically, the size of a single database server - and for the best performance requires specialized hardware and storage devices. HBase features of note are: - Strongly consistent reads/writes: HBase is not an "eventually consistent" DataStore. This - makes it very suitable for tasks such as high-speed counter aggregation. - Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are - automatically split and re-distributed as your data grows. - Automatic RegionServer failover - Hadoop/HDFS Integration: HBase supports HDFS out of the box as its distributed file system. - MapReduce: HBase supports massively parallelized processing via MapReduce for using HBase as both - source and sink. - Java Client API: HBase supports an easy to use Java API for programmatic access. - Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends. - Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high volume query optimization. - Operational Management: HBase provides build-in web-pages for operational insight as well as JMX metrics. + Strongly consistent reads/writes: HBase is not an "eventually consistent" DataStore. This + makes it very suitable for tasks such as high-speed counter aggregation. + Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are + automatically split and re-distributed as your data grows. + Automatic RegionServer failover + Hadoop/HDFS Integration: HBase supports HDFS out of the box as its distributed file system. + MapReduce: HBase supports massively parallelized processing via MapReduce for using HBase as both + source and sink. + Java Client API: HBase supports an easy to use Java API for programmatic access. + Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends. + Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high volume query optimization. + Operational Management: HBase provides build-in web-pages for operational insight as well as JMX metrics.
@@ -1140,15 +1143,15 @@ if (!b) { Key: - .META. region key (.META.,,1) + .META. region key (.META.,,1) Values: - info:regioninfo (serialized HRegionInfo - instance of .META.) - info:server (server:port of the RegionServer holding .META.) - info:serverstartcode (start-time of the RegionServer process holding .META.) + info:regioninfo (serialized HRegionInfo + instance of .META.) + info:server (server:port of the RegionServer holding .META.) + info:serverstartcode (start-time of the RegionServer process holding .META.) @@ -1158,16 +1161,16 @@ if (!b) { Key: - Region key of the format ([table],[region start key],[region id]) + Region key of the format ([table],[region start key],[region id]) Values: - info:regioninfo (serialized - HRegionInfo instance for this region) + info:regioninfo (serialized + HRegionInfo instance for this region) - info:server (server:port of the RegionServer containing this region) - info:serverstartcode (start-time of the RegionServer process containing this region) + info:server (server:port of the RegionServer containing this region) + info:serverstartcode (start-time of the RegionServer process containing this region) When a table is in the process of splitting two other columns will be created, info:splitA and info:splitB @@ -1355,7 +1358,7 @@ scan.setFilter(filter); See the Oracle JavaDoc for supported RegEx patterns in Java. -
SubstringComparator +
SubstringComparator SubstringComparator can be used to determine if a given substring exists in a value. The comparison is case-insensitive. @@ -1522,12 +1525,12 @@ rs.close();
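For example, a SubstringComparator is typically paired with a SingleColumnValueFilter; a minimal sketch follows, where the column family and qualifier names are illustrative.

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;
import org.apache.hadoop.hbase.util.Bytes;

// keep only rows whose cf:attr1 value contains "my substring" (case-insensitive)
SubstringComparator comp = new SubstringComparator("my substring");
SingleColumnValueFilter filter = new SingleColumnValueFilter(
    Bytes.toBytes("cf"), Bytes.toBytes("attr1"), CompareOp.EQUAL, comp);
Scan scan = new Scan();
scan.setFilter(filter);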
Interface The methods exposed by HMasterInterface are primarily metadata-oriented methods: - Table (createTable, modifyTable, removeTable, enable, disable) - - ColumnFamily (addColumn, modifyColumn, removeColumn) - - Region (move, assign, unassign) - + Table (createTable, modifyTable, removeTable, enable, disable) + + ColumnFamily (addColumn, modifyColumn, removeColumn) + + Region (move, assign, unassign) + For example, when the HBaseAdmin method disableTable is invoked, it is serviced by the Master server. @@ -1555,9 +1558,9 @@ rs.close();
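To make the split concrete, here is a small hedged sketch of client calls and where they are serviced; the table and column family names are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
try {
  // metadata-oriented calls are serviced by the Master (HMasterInterface);
  // "myTable" and "cf2" are illustrative names
  admin.disableTable("myTable");
  admin.addColumn("myTable", new HColumnDescriptor("cf2"));
  admin.enableTable("myTable");

  // a compaction request, by contrast, is carried out by the RegionServers
  // hosting the table's regions (see the RegionServer interface below)
  admin.majorCompact("myTable");
} finally {
  admin.close();
}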
Interface The methods exposed by HRegionRegionInterface contain both data-oriented and region-maintenance methods: - Data (get, put, delete, next, etc.) + Data (get, put, delete, next, etc.) - Region (splitRegion, compactRegion, etc.) + Region (splitRegion, compactRegion, etc.) For example, when the HBaseAdmin method majorCompact is invoked on a table, the client is actually iterating through @@ -1599,14 +1602,14 @@ rs.close(); The Block Cache is an LRU cache that contains three levels of block priority to allow for scan-resistance and in-memory ColumnFamilies: - Single access priority: The first time a block is loaded from HDFS it normally has this priority and it will be part of the first group to be considered - during evictions. The advantage is that scanned blocks are more likely to get evicted than blocks that are getting more usage. + Single access priority: The first time a block is loaded from HDFS it normally has this priority and it will be part of the first group to be considered + during evictions. The advantage is that scanned blocks are more likely to get evicted than blocks that are getting more usage. - Mutli access priority: If a block in the previous priority group is accessed again, it upgrades to this priority. It is thus part of the second group - considered during evictions. + Mutli access priority: If a block in the previous priority group is accessed again, it upgrades to this priority. It is thus part of the second group + considered during evictions. - In-memory access priority: If the block's family was configured to be "in-memory", it will be part of this priority disregarding the number of times it - was accessed. Catalog tables are configured like this. This group is the last one considered during evictions. + In-memory access priority: If the block's family was configured to be "in-memory", it will be part of this priority disregarding the number of times it + was accessed. Catalog tables are configured like this. This group is the last one considered during evictions. @@ -1630,27 +1633,27 @@ rs.close(); make the process blocking from the point where it loads new blocks. Here are some examples: - One region server with the default heap size (1GB) and the default block cache size will have 217MB of block cache available. + One region server with the default heap size (1GB) and the default block cache size will have 217MB of block cache available. - 20 region servers with the heap size set to 8GB and a default block cache size will have 34GB of block cache. + 20 region servers with the heap size set to 8GB and a default block cache size will have 34GB of block cache. - 100 region servers with the heap size set to 24GB and a block cache size of 0.5 will have about 1TB of block cache. + 100 region servers with the heap size set to 24GB and a block cache size of 0.5 will have about 1TB of block cache. Your data isn't the only resident of the block cache, here are others that you may have to take into account: - Catalog tables: The -ROOT- and .META. tables are forced into the block cache and have the in-memory priority which means that they are harder to evict. The former never uses - more than a few hundreds of bytes while the latter can occupy a few MBs (depending on the number of regions). + Catalog tables: The -ROOT- and .META. tables are forced into the block cache and have the in-memory priority which means that they are harder to evict. 
The former never uses + more than a few hundreds of bytes while the latter can occupy a few MBs (depending on the number of regions). - HFiles indexes: HFile is the file format that HBase uses to store data in HDFS and it contains a multi-layered index in order seek to the data without having to read the whole file. + HFile indexes: HFile is the file format that HBase uses to store data in HDFS, and it contains a multi-layered index in order to seek to the data without having to read the whole file. The size of those indexes is a factor of the block size (64KB by default), the size of your keys and the amount of data you are storing. For big data sets it's not unusual to see numbers around - 1GB per region server, although not all of it will be in cache because the LRU will evict indexes that aren't used. + 1GB per region server, although not all of it will be in cache because the LRU will evict indexes that aren't used. - Keys: Taking into account only the values that are being stored is missing half the picture since every value is stored along with its keys - (row key, family, qualifier, and timestamp). See . + Keys: Taking into account only the values that are being stored is missing half the picture, since every value is stored along with its keys + (row key, family, qualifier, and timestamp). See . - Bloom filters: Just like the HFile indexes, those data structures (when enabled) are stored in the LRU. + Bloom filters: Just like the HFile indexes, those data structures (when enabled) are stored in the LRU. Currently the recommended way to measure HFile index and bloom filter sizes is to look at the region server web UI and check out the relevant metrics. For keys, @@ -1660,14 +1663,14 @@ rs.close(); but you need to process 1TB of data. One of the reasons is that the churn generated by the evictions will trigger more garbage collections unnecessarily. Here are two use cases: - Fully random reading pattern: This is a case where you almost never access the same row twice within a short amount of time such that the chance of hitting a cached block is close + Fully random reading pattern: This is a case where you almost never access the same row twice within a short amount of time, such that the chance of hitting a cached block is close to 0. Setting block caching on such a table is a waste of memory and CPU cycles, more so because it will generate more garbage for the JVM to pick up. For more information on monitoring GC, - see . + see . - Mapping a table: In a typical MapReduce job that takes a table in input, every row will be read only once so there's no need to put them into the block cache. The Scan object has + Mapping a table: In a typical MapReduce job that takes a table as input, every row will be read only once, so there is no need to put its blocks into the block cache. The Scan object has the option of turning this off via the setCacheBlocks method (set it to false). You can still keep block caching turned on for this table if you need fast random read access. An example would be counting the number of rows in a table that serves live traffic: caching every block of that table would create massive churn and would surely evict data that's currently in use. - +
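For the MapReduce case above, the two relevant Scan knobs look roughly like this; the caching value of 500 is only a commonly used starting point, not a recommendation.

import org.apache.hadoop.hbase.client.Scan;

Scan scan = new Scan();
scan.setCaching(500);        // rows fetched per RPC; the small default is inefficient for a full-table scan
scan.setCacheBlocks(false);  // don't churn the block cache for a one-pass read
// hand the scan to TableMapReduceUtil.initTableMapperJob(...) as in the MapReduce examples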
@@ -1683,7 +1686,7 @@ rs.close(); This ensures that HBase has durable writes. Without WAL, there is the possibility of data loss in the case of a RegionServer failure before each MemStore is flushed and new StoreFiles are written. HLog is the HBase WAL implementation, and there is one HLog instance per RegionServer. - The WAL is in HDFS in /hbase/.logs/ with subdirectories per region. + The WAL is in HDFS in /hbase/.logs/ with subdirectories per region. For more general information about the concept of write ahead logs, see the Wikipedia Write-Ahead Log article. @@ -1755,13 +1758,14 @@ rs.close(); For a description of what HBase files look like when written to HDFS, see .
+ Considerations for Number of Regions In general, HBase is designed to run with a small (20-200) number of relatively large (5-20Gb) regions per server. The considerations for this are as follows:
Why cannot I have too many regions? Typically you want to keep your region count low on HBase for numerous reasons. Usually right around 100 regions per RegionServer has yielded the best results. - Here are some of the reasons below for keeping region count low: + Here are some of the reasons below for keeping region count low: MSLAB requires 2mb per memstore (that's 2mb per family per region). @@ -1791,12 +1795,13 @@ rs.close(); creating memory pressure or OOME on the RSs - + Another issue is the effect of the number of regions on mapreduce jobs; it is typical to have one mapper per HBase region. + Thus, hosting only 5 regions per RS may not be enough to get sufficient number of tasks for a mapreduce job, while 1000 regions will generate far too many tasks. + + See for configuration guidelines. +
- Another issue is the effect of the number of regions on mapreduce jobs; it is typical to have one mapper per HBase region. - Thus, hosting only 5 regions per RS may not be enough to get sufficient number of tasks for a mapreduce job, while 1000 regions will generate far too many tasks. - - See for configuration guidelines. +
@@ -1808,18 +1813,18 @@ rs.close(); Startup When HBase starts regions are assigned as follows (short version): - The Master invokes the AssignmentManager upon startup. + The Master invokes the AssignmentManager upon startup. - The AssignmentManager looks at the existing region assignments in META. + The AssignmentManager looks at the existing region assignments in META. - If the region assignment is still valid (i.e., if the RegionServer is still online) - then the assignment is kept. + If the region assignment is still valid (i.e., if the RegionServer is still online) + then the assignment is kept. - If the assignment is invalid, then the LoadBalancerFactory is invoked to assign the - region. The DefaultLoadBalancer will randomly assign the region to a RegionServer. + If the assignment is invalid, then the LoadBalancerFactory is invoked to assign the + region. The DefaultLoadBalancer will randomly assign the region to a RegionServer. - META is updated with the RegionServer assignment (if needed) and the RegionServer start codes - (start time of the RegionServer process) upon region opening by the RegionServer. + META is updated with the RegionServer assignment (if needed) and the RegionServer start codes + (start time of the RegionServer process) upon region opening by the RegionServer. @@ -1829,12 +1834,12 @@ rs.close(); Failover When a RegionServer fails (short version): - The regions immediately become unavailable because the RegionServer is down. + The regions immediately become unavailable because the RegionServer is down. - The Master will detect that the RegionServer has failed. + The Master will detect that the RegionServer has failed. - The region assignments will be considered invalid and will be re-assigned just - like the startup sequence. + The region assignments will be considered invalid and will be re-assigned just + like the startup sequence. @@ -1852,18 +1857,19 @@ rs.close();
Region-RegionServer Locality Over time, Region-RegionServer locality is achieved via HDFS block replication. - The HDFS client does the following by default when choosing locations to write replicas: + The HDFS client does the following by default when choosing locations to write replicas: - First replica is written to local node + First replica is written to local node - Second replica is written to a random node on another rack + Second replica is written to a random node on another rack - Third replica is written on the same rack as the second, but on a different node chosen randomly + Third replica is written on the same rack as the second, but on a different node chosen randomly - Subsequent replicas are written on random nodes on the cluster + Subsequent replicas are written on random nodes on the cluster See Replica Placement: The First Baby Steps on this page: HDFS Architecture - + + Thus, HBase eventually achieves locality for a region after a flush or a compaction. In a RegionServer failover situation a RegionServer may be assigned regions with non-local StoreFiles (because none of the replicas are local), however as new data is written @@ -1945,7 +1951,7 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName( the SSTable file described in the BigTable [2006] paper and on Hadoop's tfile (The unit test suite and the compression harness were taken directly from tfile). - Schubert Zhang's blog post on HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs makes for a thorough introduction to HBase's hfile. Matteo Bertozzi has also put up a + Schubert Zhang's blog post on HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs makes for a thorough introduction to HBase's hfile. Matteo Bertozzi has also put up a helpful description, HBase I/O: HFile. For more information, see the HFile source code. @@ -1988,21 +1994,21 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName( The KeyValue format inside a byte array is: - keylength - valuelength - key - value + keylength + valuelength + key + value The Key is further decomposed as: - rowlength - row (i.e., the rowkey) - columnfamilylength - columnfamily - columnqualifier - timestamp - keytype (e.g., Put, Delete, DeleteColumn, DeleteFamily) + rowlength + row (i.e., the rowkey) + columnfamilylength + columnfamily + columnqualifier + timestamp + keytype (e.g., Put, Delete, DeleteColumn, DeleteFamily) KeyValue instances are not split across blocks. @@ -2012,37 +2018,38 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName(
Example To emphasize the points above, examine what happens with two Puts for two different columns for the same row: - Put #1: rowkey=row1, cf:attr1=value1 - Put #2: rowkey=row1, cf:attr2=value2 + Put #1: rowkey=row1, cf:attr1=value1 + Put #2: rowkey=row1, cf:attr2=value2 Even though these are for the same row, a KeyValue is created for each column: Key portion for Put #1: - rowlength ------------> 4 - row -----------------> row1 - columnfamilylength ---> 2 - columnfamily --------> cf - columnqualifier ------> attr1 - timestamp -----------> server time of Put - keytype -------------> Put + rowlength ------------> 4 + row -----------------> row1 + columnfamilylength ---> 2 + columnfamily --------> cf + columnqualifier ------> attr1 + timestamp -----------> server time of Put + keytype -------------> Put Key portion for Put #2: - rowlength ------------> 4 - row -----------------> row1 - columnfamilylength ---> 2 - columnfamily --------> cf - columnqualifier ------> attr2 - timestamp -----------> server time of Put - keytype -------------> Put + rowlength ------------> 4 + row -----------------> row1 + columnfamilylength ---> 2 + columnfamily --------> cf + columnqualifier ------> attr2 + timestamp -----------> server time of Put + keytype -------------> Put + It is critical to understand that the rowkey, ColumnFamily, and column (aka columnqualifier) are embedded within + the KeyValue instance. The longer these identifiers are, the bigger the KeyValue is.
- It is critical to understand that the rowkey, ColumnFamily, and column (aka columnqualifier) are embedded within - the KeyValue instance. The longer these identifiers are, the bigger the KeyValue is. +
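A quick way to see this in code is to build the two KeyValues directly and compare their sizes; the long qualifier below is deliberately exaggerated, and the row, family and value mirror the example above.

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

long now = System.currentTimeMillis();

// same row, family and value; only the column qualifier differs
KeyValue kv1 = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
    Bytes.toBytes("attr1"), now, Bytes.toBytes("value1"));
KeyValue kv2 = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
    Bytes.toBytes("a_very_long_column_qualifier_name"), now, Bytes.toBytes("value1"));

// the longer identifier is carried inside every single KeyValue
System.out.println(kv1.getKeyLength() + " vs " + kv2.getKeyLength()); // key portion only
System.out.println(kv1.getLength() + " vs " + kv2.getLength());       // entire KeyValue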
Compaction @@ -2074,16 +2081,16 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName( Important knobs: - hbase.store.compaction.ratio Ratio used in compaction - file selection algorithm (default 1.2f). - hbase.hstore.compaction.min (.90 hbase.hstore.compactionThreshold) (files) Minimum number - of StoreFiles per Store to be selected for a compaction to occur (default 2). - hbase.hstore.compaction.max (files) Maximum number of StoreFiles to compact per minor compaction (default 10). - hbase.hstore.compaction.min.size (bytes) + hbase.store.compaction.ratio Ratio used in the compaction file selection algorithm (default 1.2f). + hbase.hstore.compaction.min (.90 hbase.hstore.compactionThreshold) (files) Minimum number of StoreFiles per Store to be selected for a compaction to occur (default 2). + hbase.hstore.compaction.max (files) Maximum number of StoreFiles to compact per minor compaction (default 10). + hbase.hstore.compaction.min.size (bytes) Any StoreFile smaller than this setting will automatically be a candidate for compaction. Defaults to - hbase.hregion.memstore.flush.size (128 mb). - hbase.hstore.compaction.max.size (.92) (bytes) - Any StoreFile larger than this setting with automatically be excluded from compaction (default Long.MAX_VALUE). + hbase.hregion.memstore.flush.size (128 mb). + hbase.hstore.compaction.max.size (.92) (bytes) + Any StoreFile larger than this setting will automatically be excluded from compaction (default Long.MAX_VALUE). The minor compaction StoreFile selection logic is size based, and selects a file for compaction when the file <= sum(smaller_files) * hbase.hstore.compaction.ratio. @@ -2092,26 +2099,27 @@
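Expressed as code, the selection works roughly as follows. This is a simplified sketch, not the actual org.apache.hadoop.hbase.regionserver implementation: it walks the StoreFiles from oldest to newest, skips files that are too large relative to the newer files, and then takes the remaining run of files subject to the min/max counts.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// sizes[] is ordered oldest to newest, mirroring the examples that follow
static List<Long> selectForMinorCompaction(long[] sizes, double ratio,
    int minFiles, int maxFiles, long minSize, long maxSize) {
  int start = 0;
  while (start < sizes.length) {
    long sumNewer = 0;
    for (int j = start + 1; j < sizes.length; j++) {
      sumNewer += sizes[j];
    }
    boolean qualifies = sizes[start] <= maxSize
        && (sizes[start] <= minSize || sizes[start] <= sumNewer * ratio);
    if (qualifies) {
      break;        // this file and everything newer become candidates
    }
    start++;        // too big relative to the newer files; skip it
  }
  List<Long> candidates = new ArrayList<Long>();
  for (int i = start; i < sizes.length && candidates.size() < maxFiles; i++) {
    candidates.add(sizes[i]);   // cap at hbase.hstore.compaction.max
  }
  // not enough files to bother compacting (hbase.hstore.compaction.min)
  return candidates.size() >= minFiles ? candidates : Collections.<Long>emptyList();
}

Running this sketch with the parameters of Example #1 below returns the 23, 12 and 12 byte files; with Example #2 it returns nothing, because only two files qualify.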
Minor Compaction File Selection - Example #1 (Basic Example) - This example mirrors an example from the unit test TestCompactSelection. + This example mirrors an example from the unit test TestCompactSelection. - hbase.store.compaction.ratio = 1.0f - hbase.hstore.compaction.min = 3 (files) > - hbase.hstore.compaction.max = 5 (files) > - hbase.hstore.compaction.min.size = 10 (bytes) > - hbase.hstore.compaction.max.size = 1000 (bytes) > + hbase.store.compaction.ratio = 1.0f + hbase.hstore.compaction.min = 3 (files) + hbase.hstore.compaction.max = 5 (files) + hbase.hstore.compaction.min.size = 10 (bytes) + hbase.hstore.compaction.max.size = 1000 (bytes) + The following StoreFiles exist: 100, 50, 23, 12, and 12 bytes apiece (oldest to newest). With the above parameters, the files that would be selected for minor compaction are 23, 12, and 12. Why? - 100 --> No, because sum(50, 23, 12, 12) * 1.0 = 97. - 50 --> No, because sum(23, 12, 12) * 1.0 = 47. - 23 --> Yes, because sum(12, 12) * 1.0 = 24. - 12 --> Yes, because the previous file has been included, and because this - does not exceed the the max-file limit of 5 - 12 --> Yes, because the previous file had been included, and because this - does not exceed the the max-file limit of 5. + 100 --> No, because sum(50, 23, 12, 12) * 1.0 = 97. + 50 --> No, because sum(23, 12, 12) * 1.0 = 47. + 23 --> Yes, because sum(12, 12) * 1.0 = 24. + 12 --> Yes, because the previous file has been included, and because this + does not exceed the the max-file limit of 5 + 12 --> Yes, because the previous file had been included, and because this + does not exceed the the max-file limit of 5.
@@ -2119,11 +2127,11 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName( Minor Compaction File Selection - Example #2 (Not Enough Files To Compact) This example mirrors an example from the unit test TestCompactSelection. - hbase.store.compaction.ratio = 1.0f - hbase.hstore.compaction.min = 3 (files) > - hbase.hstore.compaction.max = 5 (files) > - hbase.hstore.compaction.min.size = 10 (bytes) > - hbase.hstore.compaction.max.size = 1000 (bytes) > + hbase.store.compaction.ratio = 1.0f + hbase.hstore.compaction.min = 3 (files) + hbase.hstore.compaction.max = 5 (files) + hbase.hstore.compaction.min.size = 10 (bytes) + hbase.hstore.compaction.max.size = 1000 (bytes) The following StoreFiles exist: 100, 25, 12, and 12 bytes apiece (oldest to newest). @@ -2131,35 +2139,35 @@ myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName( Why? - 100 --> No, because sum(25, 12, 12) * 1.0 = 47 - 25 --> No, because sum(12, 12) * 1.0 = 24 - 12 --> No. Candidate because sum(12) * 1.0 = 12, there are only 2 files to compact and that is less than the threshold of 3 - 12 --> No. Candidate because the previous StoreFile was, but there are not enough files to compact + 100 --> No, because sum(25, 12, 12) * 1.0 = 47 + 25 --> No, because sum(12, 12) * 1.0 = 24 + 12 --> No. Candidate because sum(12) * 1.0 = 12, there are only 2 files to compact and that is less than the threshold of 3 + 12 --> No. Candidate because the previous StoreFile was, but there are not enough files to compact
-
+
Minor Compaction File Selection - Example #3 (Limiting Files To Compact) This example mirrors an example from the unit test TestCompactSelection. - hbase.store.compaction.ratio = 1.0f - hbase.hstore.compaction.min = 3 (files) > - hbase.hstore.compaction.max = 5 (files) > - hbase.hstore.compaction.min.size = 10 (bytes) > - hbase.hstore.compaction.max.size = 1000 (bytes) > + hbase.store.compaction.ratio = 1.0f + hbase.hstore.compaction.min = 3 (files) + hbase.hstore.compaction.max = 5 (files) + hbase.hstore.compaction.min.size = 10 (bytes) + hbase.hstore.compaction.max.size = 1000 (bytes) The following StoreFiles exist: 7, 6, 5, 4, 3, 2, and 1 bytes apiece (oldest to newest). With the above parameters, the files that would be selected for minor compaction are 7, 6, 5, 4, 3. Why? - 7 --> Yes, because sum(6, 5, 4, 3, 2, 1) * 1.0 = 21. Also, 7 is less than the min-size - 6 --> Yes, because sum(5, 4, 3, 2, 1) * 1.0 = 15. Also, 6 is less than the min-size. - 5 --> Yes, because sum(4, 3, 2, 1) * 1.0 = 10. Also, 5 is less than the min-size. - 4 --> Yes, because sum(3, 2, 1) * 1.0 = 6. Also, 4 is less than the min-size. - 3 --> Yes, because sum(2, 1) * 1.0 = 3. Also, 3 is less than the min-size. - 2 --> No. Candidate because previous file was selected and 2 is less than the min-size, but the max-number of files to compact has been reached. - 1 --> No. Candidate because previous file was selected and 1 is less than the min-size, but max-number of files to compact has been reached. + 7 --> Yes, because sum(6, 5, 4, 3, 2, 1) * 1.0 = 21. Also, 7 is less than the min-size + 6 --> Yes, because sum(5, 4, 3, 2, 1) * 1.0 = 15. Also, 6 is less than the min-size. + 5 --> Yes, because sum(4, 3, 2, 1) * 1.0 = 10. Also, 5 is less than the min-size. + 4 --> Yes, because sum(3, 2, 1) * 1.0 = 6. Also, 4 is less than the min-size. + 3 --> Yes, because sum(2, 1) * 1.0 = 3. Also, 3 is less than the min-size. + 2 --> No. Candidate because previous file was selected and 2 is less than the min-size, but the max-number of files to compact has been reached. + 1 --> No. Candidate because previous file was selected and 1 is less than the min-size, but max-number of files to compact has been reached.
@@ -2183,11 +2191,11 @@ This feature is fully compatible with default compactions - it can be enabled fo
When to use You might want to consider using this feature if you have: - -large regions (in that case, you can get the positive effect of much smaller regions without additional memstore and region management overhead); or - +large regions (in that case, you can get the positive effect of much smaller regions without additional memstore and region management overhead); or + + non-uniform row keys, e.g. time dimension in a key (in that case, only the stripes receiving the new keys will keep compacting - old data will not compact as much, or at all). - + According to perf testing performed, in these case the read performance can improve somewhat, and the read and write performance variability due to compactions is greatly reduced. There's overall perf improvement on large, non-uniform row key regions (hash-prefixed timestamp key) over long term. All of these performance gains are best realized when table is already large. In future, the perf improvement might also extend to region splits. @@ -2222,21 +2230,23 @@ Based on your region sizing, you might want to also change your stripe sizing. B You can improve this pattern for your data. You should generally aim at stripe size of at least 1Gb, and about 8-12 stripes for uniform row keys - so, for example if your regions are 30 Gb, 12x2.5Gb stripes might be a good idea. The settings are as follows: - + SettingNotes hbase.store.stripe.initialStripeCount -Initial stripe count to create. You can use it as follows: - - +Initial stripe count to create. You can use it as follows: + + for relatively uniform row keys, if you know the approximate target number of stripes from the above, you can avoid some splitting overhead by starting w/several stripes (2, 5, 10...). Note that if the early data is not representative of overall row key distribution, this will not be as efficient. - + + for existing tables with lots of data, you can use this to pre-split stripes. - -for e.g. hash-prefixed sequential keys, with more than one hash prefix per region, you know that some pre-splitting makes sense. - + + +for e.g. hash-prefixed sequential keys, with more than one hash prefix per region, you know that some pre-splitting makes sense. + hbase.store.stripe.sizeToSplit @@ -2248,7 +2258,7 @@ Maximum stripe size before it's split. You can use this in conjunction with the The number of new stripes to create when splitting one. The default is 2, and is good for most cases. For non-uniform row keys, you might experiment with increasing the number somewhat (3-4), to isolate the arriving updates into narrower slice of the region with just one split instead of several. -
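As a sketch of how the sizing above might be applied per table, the following assumes these stripe properties can be overridden through HTableDescriptor.setValue() like other store-level settings; the table name, the 30Gb/12-stripe sizing and the byte value chosen for sizeToSplit are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes("myTable"));  // illustrative table

// aim for roughly 12 stripes of ~2.5Gb in a ~30Gb region, splitting a stripe in two at ~5Gb
// (assumption: these keys are honored as table-level overrides)
htd.setValue("hbase.store.stripe.initialStripeCount", "12");
htd.setValue("hbase.store.stripe.sizeToSplit", String.valueOf(5L * 1024 * 1024 * 1024));
htd.setValue("hbase.store.stripe.splitPartCount", "2");

admin.disableTable("myTable");
admin.modifyTable(Bytes.toBytes("myTable"), htd);
admin.enableTable("myTable");
admin.close();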
+
Memstore sizing @@ -2350,7 +2360,7 @@ All the settings that apply to normal compactions (file size limits, etc.) apply where importtsv or your MapReduce job put its results, and the table name to import into. For example: - $ hadoop jar hbase-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable + $ hadoop jar hbase-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable The -c config-file option can be used to specify a file containing the appropriate hbase parameters (e.g., hbase-site.xml) if @@ -2370,7 +2380,7 @@ All the settings that apply to normal compactions (file size limits, etc.) apply For more information about the referenced utilities, see and . - See How-to: Use HBase Bulk Loading, and Why + See How-to: Use HBase Bulk Loading, and Why for a recent blog on current state of bulk loading.
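The same step can also be driven from Java. A hedged sketch using the LoadIncrementalHFiles class follows, reusing the output directory and table name from the command-line example above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

// programmatic equivalent of the completebulkload command shown earlier
Configuration conf = HBaseConfiguration.create();
LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
HTable table = new HTable(conf, "mytable");
try {
  loader.doBulkLoad(new Path("/user/todd/myoutput"), table);  // dir written by importtsv / the MR job
} finally {
  table.close();
}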
@@ -2418,12 +2428,11 @@ All the settings that apply to normal compactions (file size limits, etc.) apply - FAQ - + General When should I use HBase? @@ -2484,7 +2493,7 @@ All the settings that apply to normal compactions (file size limits, etc.) apply - + Where can I learn about the rest of the configuration options? @@ -2626,7 +2635,7 @@ identifying mode and a multi-phase read-write repair mode.
Running hbck to identify inconsistencies -To check to see if your HBase cluster has corruptions, run hbck against your HBase cluster: +To check to see if your HBase cluster has corruptions, run hbck against your HBase cluster: $ ./bin/hbase hbck @@ -2642,9 +2651,9 @@ listing of all the splits present in all the tables. $ ./bin/hbase hbck -details -If you just want to know if some tables are corrupted, you can limit hbck to identify inconsistencies +If you just want to know if some tables are corrupted, you can limit hbck to identify inconsistencies in only specific tables. For example the following command would only attempt to check table -TableFoo and TableBar. The benefit is that hbck will run in less time. +TableFoo and TableBar. The benefit is that hbck will run in less time. $ ./bin/hbase hbck TableFoo TableBar @@ -2659,12 +2668,12 @@ the hbck tool enabled with automatic repair options. There are two invariants that when violated create inconsistencies in HBase: - HBase’s region consistency invariant is satisfied if every region is assigned and + HBase’s region consistency invariant is satisfied if every region is assigned and deployed on exactly one region server, and all places where this state kept is in -accordance. +accordance. - HBase’s table integrity invariant is satisfied if for each table, every possible row key -resolves to exactly one region. + HBase’s table integrity invariant is satisfied if for each table, every possible row key +resolves to exactly one region. @@ -2695,11 +2704,11 @@ Region consistency requires that the HBase instance has the state of the region (.regioninfo files), the region’s row in the .META. table., and region’s deployment/assignments on region servers and the master in accordance. Options for repairing region consistency include: - -fixAssignments (equivalent to the 0.90 -fix option) repairs unassigned, incorrectly -assigned or multiply assigned regions. + -fixAssignments (equivalent to the 0.90 -fix option) repairs unassigned, incorrectly +assigned or multiply assigned regions. - -fixMeta which removes meta rows when corresponding regions are not present in -HDFS and adds new meta rows if they regions are present in HDFS while not in META. + -fixMeta which removes meta rows when corresponding regions are not present in + HDFS and adds new meta rows if they regions are present in HDFS while not in META. To fix deployment and assignment problems you can run this command: @@ -2707,48 +2716,48 @@ HDFS and adds new meta rows if they regions are present in HDFS while not in MET $ ./bin/hbase hbck -fixAssignments -To fix deployment and assignment problems as well as repairing incorrect meta rows you can -run this command:. +To fix deployment and assignment problems as well as repairing incorrect meta rows you can +run this command: $ ./bin/hbase hbck -fixAssignments -fixMeta -There are a few classes of table integrity problems that are low risk repairs. The first two are +There are a few classes of table integrity problems that are low risk repairs. The first two are degenerate (startkey == endkey) regions and backwards regions (startkey > endkey). These are automatically handled by sidelining the data to a temporary directory (/hbck/xxxx). -The third low-risk class is hdfs region holes. This can be repaired by using the: +The third low-risk class is hdfs region holes. This can be repaired by using the: - -fixHdfsHoles option for fabricating new empty regions on the file system. 
-If holes are detected you can use -fixHdfsHoles and should include -fixMeta and -fixAssignments to make the new region consistent. + -fixHdfsHoles option for fabricating new empty regions on the file system. +If holes are detected you can use -fixHdfsHoles and should include -fixMeta and -fixAssignments to make the new region consistent. $ ./bin/hbase hbck -fixAssignments -fixMeta -fixHdfsHoles -Since this is a common operation, we’ve added a the -repairHoles flag that is equivalent to the -previous command: +Since this is a common operation, we’ve added a the -repairHoles flag that is equivalent to the +previous command: $ ./bin/hbase hbck -repairHoles -If inconsistencies still remain after these steps, you most likely have table integrity problems -related to orphaned or overlapping regions. +If inconsistencies still remain after these steps, you most likely have table integrity problems +related to orphaned or overlapping regions.
Region Overlap Repairs -Table integrity problems can require repairs that deal with overlaps. This is a riskier operation +Table integrity problems can require repairs that deal with overlaps. This is a riskier operation because it requires modifications to the file system, requires some decision making, and may require some manual steps. For these repairs it is best to analyze the output of a hbck -details run so that you isolate repairs attempts only upon problems the checks identify. Because this is riskier, there are safeguard that should be used to limit the scope of the repairs. WARNING: This is a relatively new and have only been tested on online but idle HBase instances (no reads/writes). Use at your own risk in an active production environment! -The options for repairing table integrity violations include: +The options for repairing table integrity violations include: - -fixHdfsOrphans option for “adopting” a region directory that is missing a region -metadata file (the .regioninfo file). + -fixHdfsOrphans option for “adopting” a region directory that is missing a region +metadata file (the .regioninfo file). - -fixHdfsOverlaps ability for fixing overlapping regions + -fixHdfsOverlaps ability for fixing overlapping regions -When repairing overlapping regions, a region’s data can be modified on the file system in two +When repairing overlapping regions, a region’s data can be modified on the file system in two ways: 1) by merging regions into a larger region or 2) by sidelining regions by moving data to “sideline” directory where data could be restored later. Merging a large number of regions is technically correct but could result in an extremely large region that requires series of costly @@ -2757,58 +2766,58 @@ that overlap with the most other regions (likely the largest ranges) so that mer a more reasonable scale. Since these sidelined regions are already laid out in HBase’s native directory and HFile format, they can be restored by using HBase’s bulk load mechanism. The default safeguard thresholds are conservative. These options let you override the default -thresholds and to enable the large region sidelining feature. +thresholds and to enable the large region sidelining feature. - -maxMerge <n> maximum number of overlapping regions to merge + -maxMerge <n> maximum number of overlapping regions to merge - -sidelineBigOverlaps if more than maxMerge regions are overlapping, sideline attempt -to sideline the regions overlapping with the most other regions. + -sidelineBigOverlaps if more than maxMerge regions are overlapping, sideline attempt +to sideline the regions overlapping with the most other regions. - -maxOverlapsToSideline <n> if sidelining large overlapping regions, sideline at most n -regions. + -maxOverlapsToSideline <n> if sidelining large overlapping regions, sideline at most n +regions. -Since often times you would just want to get the tables repaired, you can use this option to turn -on all repair options: +Since often times you would just want to get the tables repaired, you can use this option to turn +on all repair options: - -repair includes all the region consistency options and only the hole repairing table -integrity options. + -repair includes all the region consistency options and only the hole repairing table +integrity options. -Finally, there are safeguards to limit repairs to only specific tables. For example the following -command would only attempt to check and repair table TableFoo and TableBar. 
- +Finally, there are safeguards to limit repairs to only specific tables. For example the following +command would only attempt to check and repair table TableFoo and TableBar. + $ ./bin/hbase hbck -repair TableFoo TableBar - +
Special cases: Meta is not properly assigned -There are a few special cases that hbck can handle as well. +There are a few special cases that hbck can handle as well. Sometimes the meta table’s only region is inconsistently assigned or deployed. In this case -there is a special -fixMetaOnly option that can try to fix meta assignments. - +there is a special -fixMetaOnly option that can try to fix meta assignments. + $ ./bin/hbase hbck -fixMetaOnly -fixAssignments - +
Special cases: HBase version file is missing -HBase’s data on the file system requires a version file in order to start. If this flie is missing, you +HBase’s data on the file system requires a version file in order to start. If this file is missing, you can use the -fixVersionFile option to fabricate a new HBase version file. This assumes that -the version of hbck you are running is the appropriate version for the HBase cluster. +the version of hbck you are running is the appropriate version for the HBase cluster.
Special case: Root and META are corrupt. -The most drastic corruption scenario is the case where the ROOT or META is corrupted and +The most drastic corruption scenario is the case where the ROOT or META is corrupted and HBase will not start. In this case you can use the OfflineMetaRepair tool to create new ROOT and META regions and tables. This tool assumes that HBase is offline. It then marches through the existing HBase home directory and loads as much information from region metadata files (.regioninfo files) as possible from the file system. If the region metadata has proper table integrity, it sidelines the original root and meta table directories, and builds new ones with pointers to the region directories and their -data. - + $ ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair - + -NOTE: This tool is not as clever as uberhbck but can be used to bootstrap repairs that uberhbck can complete. -If the tool succeeds you should be able to start hbase and run online repairs if necessary. +If the tool succeeds you should be able to start hbase and run online repairs if necessary.
Special cases: Offline split parent @@ -3019,15 +3028,12 @@ hbase> describe 't1'
HFile format version 1 overview As we will be discussing the changes we are making to the HFile format, it is useful to give a short overview of the previous (HFile version 1) format. An HFile in the existing format is structured as follows: - + HFile Version 1 - - HFile Version 1 - - + Image courtesy of Lars George, hbase-architecture-101-storage.html. @@ -3062,15 +3068,11 @@ hbase> describe 't1' The version of HBase introducing the above features reads both version 1 and 2 HFiles, but only writes version 2 HFiles. A version 2 HFile is structured as follows: - + HFile Version 2 - - HFile Version 2 - - @@ -3338,7 +3340,7 @@ Comparator class used for Bloom filter keys, a UTF>8 encoded string stored usi
File Info format in versions 1 and 2 - The file info block is a serialized HbaseMapWritable (essentially a map from byte arrays to byte arrays) with the following keys, among others. StoreFile-level logic adds more keys to this. + The file info block is a serialized HbaseMapWritable (essentially a map from byte arrays to byte arrays) with the following keys, among others. StoreFile-level logic adds more keys to this. @@ -3476,8 +3478,8 @@ Comparator class used for Bloom filter keys, a UTF>8 encoded string stored usi In HBASE-7845, we generate a new key that is lexicographically larger than the last key of the previous block and lexicographically equal or smaller than the start key of the current block. While actual keys can potentially be very long, this "fake key" or "virtual key" can be much shorter. For example, if the stop key of previous block is "the quick brown fox", the start key of current block is "the who", we could use "the r" as our virtual key in our hfile index. There are two benefits to this: -
having shorter keys reduces the hfile index size, (allowing us to keep more indexes in memory), and
-
using something closer to the end key of the previous block allows us to avoid a potential extra IO when the target key lives in between the "virtual key" and the key of the first element in the target block.
+ having shorter keys reduces the hfile index size (allowing us to keep more indexes in memory), and + using something closer to the end key of the previous block allows us to avoid a potential extra IO when the target key lives in between the "virtual key" and the key of the first element in the target block.
This optimization (implemented by the getShortMidpointKey method) is inspired by LevelDB's ByteWiseComparatorImpl::FindShortestSeparator() and FindShortSuccessor().
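To illustrate the idea only -- this is not HBase's actual getShortMidpointKey implementation -- a naive byte-wise separator can be computed as follows; it reproduces the "the quick brown fox" / "the who" example from the text.

// naive sketch of a "shortest separator": returns a key k with prev < k <= next
// (byte-wise), preferring something shorter than next; NOT the real HBase code
static byte[] shortSeparator(byte[] prev, byte[] next) {
  int common = 0;
  int min = Math.min(prev.length, next.length);
  while (common < min && prev[common] == next[common]) {
    common++;                                   // length of the shared prefix
  }
  // if bumping the first differing byte of prev stays below next's byte,
  // a (common + 1)-byte key is enough: "the quick brown fox" / "the who" -> "the r"
  if (common < min && (prev[common] & 0xFF) + 1 < (next[common] & 0xFF)) {
    byte[] sep = new byte[common + 1];
    System.arraycopy(prev, 0, sep, 0, common);
    sep[common] = (byte) ((prev[common] & 0xFF) + 1);
    return sep;
  }
  return next;                                  // fall back to the block's real first key
}

Called with "the quick brown fox" and "the who" as byte arrays, this yields the five-byte key "the r", matching the virtual key in the example above.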
@@ -3487,10 +3489,10 @@ Comparator class used for Bloom filter keys, a UTF>8 encoded string stored usi
HBase Videos Introduction to HBase - Introduction to HBase by Todd Lipcon (Chicago Data Summit 2011). - - Introduction to HBase by Todd Lipcon (2010). - + Introduction to HBase by Todd Lipcon (Chicago Data Summit 2011). + + Introduction to HBase by Todd Lipcon (2010). + Building Real Time Services at Facebook with HBase by Jonathan Gray (Hadoop World 2011). @@ -3517,8 +3519,8 @@ Comparator class used for Bloom filter keys, a UTF>8 encoded string stored usi
HBase Sites Cloudera's HBase Blog has a lot of links to useful HBase information. - CAP Confusion is a relevant entry for background information on - distributed storage systems. + CAP Confusion is a relevant entry for background information on + distributed storage systems. @@ -3540,14 +3542,14 @@ Comparator class used for Bloom filter keys, a UTF>8 encoded string stored usi HBase History - 2006: BigTable paper published by Google. - - 2006 (end of year): HBase development starts. - - 2008: HBase becomes Hadoop sub-project. - - 2010: HBase becomes Apache top-level project. - + 2006: BigTable paper published by Google. + + 2006 (end of year): HBase development starts. + + 2008: HBase becomes Hadoop sub-project. + + 2010: HBase becomes Apache top-level project. + diff --git src/main/docbkx/case_studies.xml src/main/docbkx/case_studies.xml index 067a8b5..15169a8 100644 --- src/main/docbkx/case_studies.xml +++ src/main/docbkx/case_studies.xml @@ -1,13 +1,13 @@ - Apache HBase Case Studies -
- Overview - This chapter will describe a variety of performance and troubleshooting case studies that can +
+ Overview + This chapter will describe a variety of performance and troubleshooting case studies that can provide a useful blueprint on diagnosing Apache HBase cluster issues. - For more information on Performance and Troubleshooting, see and . - -
- -
- Schema Design - See the schema design case studies here: - - -
- -
- Performance/Troubleshooting - + For more information on Performance and Troubleshooting, see and . + +
+ +
+ Schema Design + See the schema design case studies here: + + +
+ +
+ Performance/Troubleshooting +
Case Study #1 (Performance Issue On A Single Node)
Scenario Following a scheduled reboot, one data node began exhibiting unusual behavior. Routine MapReduce - jobs run against HBase tables which regularly completed in five or six minutes began taking 30 or 40 minutes - to finish. These jobs were consistently found to be waiting on map and reduce tasks assigned to the troubled data node - (e.g., the slow map tasks all had the same Input Split). - The situation came to a head during a distributed copy, when the copy was severely prolonged by the lagging node. - -
+ jobs run against HBase tables which regularly completed in five or six minutes began taking 30 or 40 minutes + to finish. These jobs were consistently found to be waiting on map and reduce tasks assigned to the troubled data node + (e.g., the slow map tasks all had the same Input Split). + The situation came to a head during a distributed copy, when the copy was severely prolonged by the lagging node. + +
Hardware Datanodes: - - Two 12-core processors Six Enerprise SATA disks 24GB of RAM Two bonded gigabit NICs - + + Two 12-core processors Six Enterprise SATA disks 24GB of RAM Two bonded gigabit NICs + Network: - - 10 Gigabit top-of-rack switches 20 Gigabit bonded interconnects between racks. - + + 10 Gigabit top-of-rack switches 20 Gigabit bonded interconnects between racks. +
Hypotheses -
HBase "Hot Spot" Region - We hypothesized that we were experiencing a familiar point of pain: a "hot spot" region in an HBase table, - where uneven key-space distribution can funnel a huge number of requests to a single HBase region, bombarding the RegionServer - process and cause slow response time. Examination of the HBase Master status page showed that the number of HBase requests to the - troubled node was almost zero. Further, examination of the HBase logs showed that there were no region splits, compactions, or other region transitions - in progress. This effectively ruled out a "hot spot" as the root cause of the observed slowness. +
HBase "Hot Spot" Region + We hypothesized that we were experiencing a familiar point of pain: a "hot spot" region in an HBase table, + where uneven key-space distribution can funnel a huge number of requests to a single HBase region, bombarding the RegionServer + process and cause slow response time. Examination of the HBase Master status page showed that the number of HBase requests to the + troubled node was almost zero. Further, examination of the HBase logs showed that there were no region splits, compactions, or other region transitions + in progress. This effectively ruled out a "hot spot" as the root cause of the observed slowness.
-
HBase Region With Non-Local Data - Our next hypothesis was that one of the MapReduce tasks was requesting data from HBase that was not local to the datanode, thus - forcing HDFS to request data blocks from other servers over the network. Examination of the datanode logs showed that there were very - few blocks being requested over the network, indicating that the HBase region was correctly assigned, and that the majority of the necessary - data was located on the node. This ruled out the possibility of non-local data causing a slowdown. +
HBase Region With Non-Local Data + Our next hypothesis was that one of the MapReduce tasks was requesting data from HBase that was not local to the datanode, thus + forcing HDFS to request data blocks from other servers over the network. Examination of the datanode logs showed that there were very + few blocks being requested over the network, indicating that the HBase region was correctly assigned, and that the majority of the necessary + data was located on the node. This ruled out the possibility of non-local data causing a slowdown.
-
Excessive I/O Wait Due To Swapping Or An Over-Worked Or Failing Hard Disk +
Excessive I/O Wait Due To Swapping Or An Over-Worked Or Failing Hard Disk After concluding that the Hadoop and HBase were not likely to be the culprits, we moved on to troubleshooting the datanode's hardware. - Java, by design, will periodically scan its entire memory space to do garbage collection. If system memory is heavily overcommitted, the Linux - kernel may enter a vicious cycle, using up all of its resources swapping Java heap back and forth from disk to RAM as Java tries to run garbage - collection. Further, a failing hard disk will often retry reads and/or writes many times before giving up and returning an error. This can manifest - as high iowait, as running processes wait for reads and writes to complete. Finally, a disk nearing the upper edge of its performance envelope will - begin to cause iowait as it informs the kernel that it cannot accept any more data, and the kernel queues incoming data into the dirty write pool in memory. - However, using vmstat(1) and free(1), we could see that no swap was being used, and the amount of disk IO was only a few kilobytes per second. + Java, by design, will periodically scan its entire memory space to do garbage collection. If system memory is heavily overcommitted, the Linux + kernel may enter a vicious cycle, using up all of its resources swapping Java heap back and forth from disk to RAM as Java tries to run garbage + collection. Further, a failing hard disk will often retry reads and/or writes many times before giving up and returning an error. This can manifest + as high iowait, as running processes wait for reads and writes to complete. Finally, a disk nearing the upper edge of its performance envelope will + begin to cause iowait as it informs the kernel that it cannot accept any more data, and the kernel queues incoming data into the dirty write pool in memory. + However, using vmstat(1) and free(1), we could see that no swap was being used, and the amount of disk IO was only a few kilobytes per second.
-
Slowness Due To High Processor Usage +
Slowness Due To High Processor Usage Next, we checked to see whether the system was performing slowly simply due to very high computational load. top(1) showed that the system load - was higher than normal, but vmstat(1) and mpstat(1) showed that the amount of processor being used for actual computation was low. + was higher than normal, but vmstat(1) and mpstat(1) showed that the amount of processor being used for actual computation was low.
-
Network Saturation (The Winner) +
Network Saturation (The Winner) Since neither the disks nor the processors were being utilized heavily, we moved on to the performance of the network interfaces. The datanode had two - gigabit ethernet adapters, bonded to form an active-standby interface. ifconfig(8) showed some unusual anomalies, namely interface errors, overruns, framing errors. - While not unheard of, these kinds of errors are exceedingly rare on modern hardware which is operating as it should: - + gigabit ethernet adapters, bonded to form an active-standby interface. ifconfig(8) showed some unusual anomalies, namely interface errors, overruns, framing errors. + While not unheard of, these kinds of errors are exceedingly rare on modern hardware which is operating as it should: + $ /sbin/ifconfig bond0 bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 inet addr:10.x.x.x Bcast:10.x.x.255 Mask:255.255.255.0 @@ -118,9 +118,9 @@ RX bytes:2416328868676 (2.4 TB) TX bytes:3464991094001 (3.4 TB) These errors immediately lead us to suspect that one or more of the ethernet interfaces might have negotiated the wrong line speed. This was confirmed both by running an ICMP ping - from an external host and observing round-trip-time in excess of 700ms, and by running ethtool(8) on the members of the bond interface and discovering that the active interface - was operating at 100Mbs/, full duplex. - + from an external host and observing round-trip-time in excess of 700ms, and by running ethtool(8) on the members of the bond interface and discovering that the active interface + was operating at 100Mbs/, full duplex. + $ sudo ethtool eth0 Settings for eth0: Supported ports: [ TP ] @@ -148,44 +148,44 @@ Wake-on: g Current message level: 0x00000003 (3) Link detected: yes - - In normal operation, the ICMP ping round trip time should be around 20ms, and the interface speed and duplex should read, "1000MB/s", and, "Full", respectively. - -
-
-
Resolution - After determining that the active ethernet adapter was at the incorrect speed, we used the ifenslave(8) command to make the standby interface - the active interface, which yielded an immediate improvement in MapReduce performance, and a 10 times improvement in network throughput: - - On the next trip to the datacenter, we determined that the line speed issue was ultimately caused by a bad network cable, which was replaced. - -
-
+ + In normal operation, the ICMP ping round trip time should be around 20ms, and the interface speed and duplex should read, "1000MB/s", and, "Full", respectively. + +
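            The round-trip check can be repeated from any external host with a plain ICMP ping; the address below is a placeholder for the datanode's IP:
            $ ping -c 5 10.x.x.x    # round-trip times on a healthy gigabit LAN should be in the low tens of milliseconds, not 700ms+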
+
+
Resolution + After determining that the active ethernet adapter was at the incorrect speed, we used the ifenslave(8) command to make the standby interface + the active interface, which yielded an immediate improvement in MapReduce performance, and a 10 times improvement in network throughput: + + On the next trip to the datacenter, we determined that the line speed issue was ultimately caused by a bad network cable, which was replaced. + +
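            As a sketch of the failover step (assuming the bond is bond0, the degraded active slave is eth0, and the standby is eth1; adjust the names to your environment):
            $ sudo ifenslave -c bond0 eth1                  # make the standby interface the active slave
            $ sudo ethtool eth1 | grep -E 'Speed|Duplex'    # confirm the new active slave negotiated 1000Mb/s, Full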
+
Case Study #2 (Performance Research 2012) Investigation results of a self-described "we're not sure what's wrong, but it seems slow" problem. - http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html + http://gbif.blogspot.com/2012/03/hbase-performance-evaluation-continued.html
- +
Case Study #3 (Performance Research 2010)
            Investigation results of general cluster performance from 2010.  Although this research is on an older version of the codebase, this writeup
-            is still very useful in terms of approach.
-            http://hstack.org/hbase-performance-testing/
+            Investigation results of general cluster performance from 2010. Although this research is on an older version of the codebase, this writeup
+            is still very useful in terms of approach.
+            http://hstack.org/hbase-performance-testing/
- +
Case Study #4 (xcievers Config) Case study of configuring xceivers, and diagnosing errors from mis-configurations. - http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html + http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html See also .
- -
- - + +
+ + diff --git src/main/docbkx/configuration.xml src/main/docbkx/configuration.xml index b858458..67c44df 100644 --- src/main/docbkx/configuration.xml +++ src/main/docbkx/configuration.xml @@ -173,7 +173,7 @@ needed for servers to pick up changes (caveat dynamic config. to be described la A useful read setting config on you hadoop cluster is Aaron Kimballs' Configuration + xlink:href="http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/">Configuration Parameters: What can you just ignore?
@@ -510,8 +510,7 @@ homed on the node h-24-30.example.com. Now skip to for how to start and verify your pseudo-distributed install. - See Pseudo-distributed - mode extras for notes on how to start extra Masters and + See for notes on how to start extra Masters and RegionServers when running pseudo-distributed. @@ -678,11 +677,7 @@ homed on the node h-24-30.example.com. bin/start-hbase.sh - Run the above from the - - HBASE_HOME - - directory. + Run the above from the HBASE_HOME directory. You should now have a running HBase instance. HBase logs can be found in the logs subdirectory. Check them out @@ -754,7 +749,79 @@ stopping hbase............... Shutdown can take a moment to The generated file is a docbook section with a glossary in it--> - + + +
+ + + This file is fallback content. If you are seeing this, something is wrong with the build of the HBase documentation or you are doing pre-build verification. + + + The file hbase-default.xml is generated as part of + the build of the hbase site. See the hbase pom.xml. + The generated file is a docbook glossary. + +
+ IDs that are auto-generated and cause validation errors if not present + + Each of these is a reference to a configuration file parameter which will cause an error if you are using the fallback content here. This is a dirty dirty hack. + +
+ fail.fast.expired.active.master + +
+
+ "hbase.hregion.memstore.flush.size" + +
+
+ hbase.hstore.bytes.per.checksum + +
+
+ hbase.online.schema.update.enable + +
+
+ hbase.regionserver.global.memstore.size + +
+
+ hbase.hregion.max.filesize + +
+
+ hbase.hstore.BlockingStoreFiles + +
+
+ hfile.block.cache.size + +
+
+ copy.table + +
+
+ hbase.hstore.checksum.algorithm + +
+
+ hbase.zookeeper.useMulti + +
+
+ hbase.hregion.memstore.block.multiplier + +
+
+ hbase.regionserver.global.memstore.size.lower.limit + +
+
+
+
+
diff --git src/main/docbkx/developer.xml src/main/docbkx/developer.xml index 3475c8d..5e0a490 100644 --- src/main/docbkx/developer.xml +++ src/main/docbkx/developer.xml @@ -118,8 +118,8 @@ git clone git://github.com/apache/hbase.git Maven Classpath Variable The M2_REPO classpath variable needs to be set up for the project. This needs to be set to your local Maven repository, which is usually ~/.m2/repository - If this classpath variable is not configured, you will see compile errors in Eclipse like this... - +If this classpath variable is not configured, you will see compile errors in Eclipse like this: + Description Resource Path Location Type The project cannot be built until build path errors are resolved hbase Unknown Java Problem Unbound classpath variable: 'M2_REPO/asm/asm/3.1/asm-3.1.jar' in project 'hbase' hbase Build path Build Path Problem @@ -223,51 +223,52 @@ mvn compile -Dcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc poms when you build. For now, just be aware of the difference between HBase 1.x builds and those of HBase 0.96-0.98. Below we will come back to this difference when we list out build instructions. -
+ + Publishing to maven requires you sign the artifacts you want to upload. To have the build do this for you, you need to make sure you have a properly configured settings.xml in your local repository under .m2. Here is my ~/.m2/settings.xml. - <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" + - <servers> - <!- To publish a snapshot of some part of Maven --> - <server> - <id>apache.snapshots.https</id> - <username>YOUR_APACHE_ID - </username> - <password>YOUR_APACHE_PASSWORD - </password> - </server> - <!-- To publish a website using Maven --> - <!-- To stage a release of some part of Maven --> - <server> - <id>apache.releases.https</id> - <username>YOUR_APACHE_ID - </username> - <password>YOUR_APACHE_PASSWORD - </password> - </server> - </servers> - <profiles> - <profile> - <id>apache-release</id> - <properties> - <gpg.keyname>YOUR_KEYNAME</gpg.keyname> - <!--Keyname is something like this ... 00A5F21E... do gpg --list-keys to find it--> - <gpg.passphrase>YOUR_KEY_PASSWORD - </gpg.passphrase> - </properties> - </profile> - </profiles> -</settings> + + + + apache.snapshots.https + YOUR_APACHE_ID + + YOUR_APACHE_PASSWORD + + + + + + apache.releases.https + YOUR_APACHE_ID + + YOUR_APACHE_PASSWORD + + + + + + apache-release + + YOUR_KEYNAME + + YOUR_KEY_PASSWORD + + + + +]]> You must use maven 3.0.x (Check by running mvn -version). - +
Making a Release Candidate I'll explain by running through the process. See later in this section for more detail on particular steps. @@ -500,17 +501,17 @@ under test - no use of the HBaseTestingUtility or minicluster is allowed (or eve given the dependency tree).
Running Tests in other Modules - If the module you are developing in has no other dependencies on other HBase modules, then - you can cd into that module and just run: + If the module you are developing in has no other dependencies on other HBase modules, then + you can cd into that module and just run: mvn test - which will just run the tests IN THAT MODULE. If there are other dependencies on other modules, + which will just run the tests IN THAT MODULE. If there are other dependencies on other modules, then you will have run the command from the ROOT HBASE DIRECTORY. This will run the tests in the other modules, unless you specify to skip the tests in that module. For instance, to skip the tests in the hbase-server module, - you would run: + you would run: mvn clean test -PskipServerTests - from the top level directory to run all the tests in modules other than hbase-server. Note that you + from the top level directory to run all the tests in modules other than hbase-server. Note that you can specify to skip tests in multiple modules as well as just for a single module. For example, to skip - the tests in hbase-server and hbase-common, you would run: + the tests in hbase-server and hbase-common, you would run: mvn clean test -PskipServerTests -PskipCommonTests Also, keep in mind that if you are running tests in the hbase-server module you will need to apply the maven profiles discussed in to get the tests to run properly. @@ -540,7 +541,7 @@ The first three categories, small, medium, and large are for tests run when you type $ mvn test; i.e. these three categorizations are for HBase unit tests. The integration category is for not for unit tests but for integration tests. These are run when you invoke $ mvn verify. Integration tests -are described in integration tests section and will not be discussed further +are described in and will not be discussed further in this section on HBase unit tests. Apache HBase uses a patched maven surefire plugin and maven profiles to implement @@ -578,7 +579,7 @@ the developer machine as well.
Integration Tests<indexterm><primary>IntegrationTests</primary></indexterm> Integration tests are system level tests. See -integration tests section for more info. + for more info.
@@ -703,17 +704,17 @@ should not impact these resources, it's worth checking these log lines General rules -As much as possible, tests should be written as category small tests. +As much as possible, tests should be written as category small tests. -All tests must be written to support parallel execution on the same machine, hence they should not use shared resources as fixed ports or fixed file names. +All tests must be written to support parallel execution on the same machine, hence they should not use shared resources as fixed ports or fixed file names. -Tests should not overlog. More than 100 lines/second makes the logs complex to read and use i/o that are hence not available for the other tests. +Tests should not overlog. More than 100 lines/second makes the logs complex to read and use i/o that are hence not available for the other tests. -Tests can be written with HBaseTestingUtility. -This class offers helper functions to create a temp directory and do the cleanup, or to start a cluster. +Tests can be written with HBaseTestingUtility. +This class offers helper functions to create a temp directory and do the cleanup, or to start a cluster.
@@ -721,19 +722,19 @@ This class offers helper functions to create a temp directory and do the cleanup Categories and execution time -All tests must be categorized, if not they could be skipped. +All tests must be categorized, if not they could be skipped. -All tests should be written to be as fast as possible. +All tests should be written to be as fast as possible. -Small category tests should last less than 15 seconds, and must not have any side effect. +Small category tests should last less than 15 seconds, and must not have any side effect. -Medium category tests should last less than 50 seconds. +Medium category tests should last less than 50 seconds. -Large category tests should last less than 3 minutes. This should ensure a good parallelization for people using it, and ease the analysis when the test fails. +Large category tests should last less than 3 minutes. This should ensure a good parallelization for people using it, and ease the analysis when the test fails.
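The categories above map to Maven profiles, so a run limited to one category looks something like the following (profile names as defined in the HBase pom at the time of writing; verify against your checkout):
mvn test -P runSmallTests    # only tests categorized as small
mvn test -P runMediumTests   # only tests categorized as medium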
@@ -861,17 +862,17 @@ are running other tests. -ChaosMonkey defines Action's and Policy's. Actions are sequences of events. We have at least the following actions: +ChaosMonkey defines Action's and Policy's. Actions are sequences of events. We have at least the following actions: -Restart active master (sleep 5 sec) -Restart random regionserver (sleep 5 sec) -Restart random regionserver (sleep 60 sec) -Restart META regionserver (sleep 5 sec) -Restart ROOT regionserver (sleep 5 sec) -Batch restart of 50% of regionservers (sleep 5 sec) -Rolling restart of 100% of regionservers (sleep 5 sec) +Restart active master (sleep 5 sec) +Restart random regionserver (sleep 5 sec) +Restart random regionserver (sleep 60 sec) +Restart META regionserver (sleep 5 sec) +Restart ROOT regionserver (sleep 5 sec) +Batch restart of 50% of regionservers (sleep 5 sec) +Rolling restart of 100% of regionservers (sleep 5 sec) - + Policies on the other hand are responsible for executing the actions based on a strategy. The default policy is to execute a random action every minute based on predefined action weights. ChaosMonkey executes predefined named policies until it is stopped. More than one @@ -880,11 +881,12 @@ policy can be active at any time. To run ChaosMonkey as a standalone tool deploy your HBase cluster as usual. ChaosMonkey uses the configuration -from the bin/hbase script, thus no extra configuration needs to be done. You can invoke the ChaosMonkey by running: +from the bin/hbase script, thus no extra configuration needs to be done. You can invoke the ChaosMonkey by running: bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey - + This will output smt like: - + + 12/11/19 23:21:57 INFO util.ChaosMonkey: Using ChaosMonkey Policy: class org.apache.hadoop.hbase.util.ChaosMonkey$PeriodicRandomActionPolicy, period:60000 12/11/19 23:21:57 INFO util.ChaosMonkey: Sleeping for 26953 to add jitter 12/11/19 23:22:24 INFO util.ChaosMonkey: Performing action: Restart active master @@ -920,8 +922,8 @@ This will output smt like: 12/11/19 23:24:26 INFO hbase.ClusterManager: Executed remote command, exit code:0 , output:starting regionserver, logging to /homes/enis/code/hbase-0.94/bin/../logs/hbase-enis-regionserver-rs3.example.com.out 12/11/19 23:24:27 INFO util.ChaosMonkey: Started region server:rs3.example.com,60020,1353367027826. Reported num of rs:6 - - + + As you can see from the log, ChaosMonkey started the default PeriodicRandomActionPolicy, which is configured with all the available actions, and ran RestartActiveMaster and RestartRandomRs actions. ChaosMonkey tool, if run from command line, will keep on running until the process is killed.
@@ -957,7 +959,7 @@ mvn compile The above will build against whatever explicit hadoop 1.x version we have in our pom.xml as our '1.0' version. Tests may not all pass so you may need to pass -DskipTests unless you are inclined to fix the failing tests. - + 'dependencyManagement.dependencies.dependency.artifactId' for org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does not match a valid id pattern You will see ERRORs like the above title if you pass the default profile; e.g. if you pass hadoop.profile=1.1 when building 0.96 or @@ -1000,12 +1002,12 @@ pecularity that is probably fixable but we've not spent the time trying to figur
Jira Priorities The following is a guideline on setting Jira issue priorities: - Blocker: Should only be used if the issue WILL cause data loss or cluster instability reliably. - Critical: The issue described can cause data loss or cluster instability in some cases. - Major: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant - bugs that need to be fixed but that don't cause data loss. - Minor: Useful enhancements and annoying but not damaging bugs. - Trivial: Useful enhancements but generally cosmetic. + Blocker: Should only be used if the issue WILL cause data loss or cluster instability reliably. + Critical: The issue described can cause data loss or cluster instability in some cases. + Major: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant + bugs that need to be fixed but that don't cause data loss. + Minor: Useful enhancements and annoying but not damaging bugs. + Trivial: Useful enhancements but generally cosmetic.
@@ -1160,10 +1162,9 @@ pecularity that is probably fixable but we've not spent the time trying to figur Please submit one patch-file per Jira. For example, if multiple files are changed make sure the selected resource when generating the patch is a directory. Patch files can reflect changes in multiple files. - Generating patches using git: - -$ git diff --no-prefix > HBASE_XXXX.patch - + Generating patches using git: +$ git diff --no-prefix > HBASE_XXXX.patch + Don't forget the 'no-prefix' option; and generate the diff from the root directory of project Make sure you review for code style. @@ -1282,11 +1283,10 @@ Bar bar = foo.getBar(); <--- imagine there's an extra space(s) after the
Javadoc - This is also a very common feedback item. Don't forget Javadoc! + This is also a very common feedback item. Don't forget Javadoc! Javadoc warnings are checked during precommit. If the precommit tool gives you a '-1', please fix the javadoc issue. Your patch won't be committed if it adds such warnings. -
Findbugs @@ -1344,25 +1344,25 @@ Bar bar = foo.getBar(); <--- imagine there's an extra space(s) after the - Do not delete the old patch file + Do not delete the old patch file - version your new patch file using a simple scheme like this: - HBASE-{jira number}-{version}.patch - e.g: - HBASE_XXXX-v2.patch + version your new patch file using a simple scheme like this: + HBASE-{jira number}-{version}.patch + e.g: + HBASE_XXXX-v2.patch - 'Cancel Patch' on JIRA.. bug status will change back to Open + 'Cancel Patch' on JIRA.. bug status will change back to Open - Attach new patch file (e.g. HBASE_XXXX-v2.patch) using 'Files --> Attach' + Attach new patch file (e.g. HBASE_XXXX-v2.patch) using 'Files --> Attach' - Click on 'Submit Patch'. Now the bug status will say 'Patch Available'. + Click on 'Submit Patch'. Now the bug status will say 'Patch Available'. - Committers will review the patch. Rinse and repeat as many times as needed :-) + Committers will review the patch. Rinse and repeat as many times as needed :-)
@@ -1371,32 +1371,32 @@ Bar bar = foo.getBar(); <--- imagine there's an extra space(s) after the At times you may want to break a big change into mulitple patches. Here is a sample work-flow using git - patch 1: + patch 1: - $ git diff --no-prefix > HBASE_XXXX-1.patch + $ git diff --no-prefix > HBASE_XXXX-1.patch - patch 2: + patch 2: - create a new git branch - $ git checkout -b my_branch + create a new git branch + $ git checkout -b my_branch - save your work - $ git add file1 file2 - $ git commit -am 'saved after HBASE_XXXX-1.patch' - now you have your own branch, that is different from remote master branch + save your work + $ git add file1 file2 + $ git commit -am 'saved after HBASE_XXXX-1.patch' + now you have your own branch, that is different from remote master branch - make more changes... + make more changes... - create second patch - $ git diff --no-prefix > HBASE_XXXX-2.patch + create second patch + $ git diff --no-prefix > HBASE_XXXX-2.patch diff --git src/main/docbkx/external_apis.xml src/main/docbkx/external_apis.xml index e4c9f23..59f34a9 100644 --- src/main/docbkx/external_apis.xml +++ src/main/docbkx/external_apis.xml @@ -27,8 +27,8 @@ */ --> Apache HBase External APIs - This chapter will cover access to Apache HBase either through non-Java languages, or through custom protocols. - + This chapter will cover access to Apache HBase either through non-Java languages, or through custom protocols. +
Non-Java Languages Talking to the JVM Currently the documentation on this topic in the @@ -172,7 +172,7 @@ Example4: =, 'substring:abc123' will match everything that begins with the substring "abc123"
-
Example PHP Client Program that uses the Filter Language +
Example PHP Client Program that uses the Filter Language <? $_SERVER['PHP_ROOT'] = realpath(dirname(__FILE__).'/..'); require_once $_SERVER['PHP_ROOT'].'/flib/__flib.php'; @@ -205,7 +205,7 @@ - + “(RowFilter (=, ‘binary:Row 1’) AND TimeStampsFilter (74689, 89734)) OR @@ -216,7 +216,7 @@ 1) The key-value pair must be in a column that is lexicographically >= abc and < xyz  - + @@ -228,7 +228,7 @@
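The same filter expressions can also be exercised interactively from the HBase shell, which accepts the filter language in a scan. A hedged example mirroring the expression discussed above (the table name test is a placeholder, and filter-name casing follows the filter class names):
hbase> scan 'test', { FILTER => "(RowFilter (=, 'binary:Row 1') AND TimestampsFilter (74689, 89734)) OR ColumnRangeFilter ('abc', true, 'xyz', false)" }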
-
Individual Filter Syntax +
Individual Filter Syntax KeyOnlyFilter diff --git src/main/docbkx/getting_started.xml src/main/docbkx/getting_started.xml index cd47284..c99adf8 100644 --- src/main/docbkx/getting_started.xml +++ src/main/docbkx/getting_started.xml @@ -1,13 +1,13 @@ - Getting Started - +
Introduction - + will get you up and - running on a single-node, standalone instance of HBase. + running on a single-node, standalone instance of HBase.
- +
Quick Start - + This guide describes setup of a standalone HBase instance. It will - run against the local filesystem. In later sections we will take you through - how to run HBase on Apache Hadoop's HDFS, a distributed filesystem. This section - shows you how to create a table in HBase, inserting - rows into your new HBase table via the HBase shell, and then cleaning - up and shutting down your standalone, local filesystem-based HBase instance. The below exercise - should take no more than ten minutes (not including download time). + run against the local filesystem. In later sections we will take you through + how to run HBase on Apache Hadoop's HDFS, a distributed filesystem. This section + shows you how to create a table in HBase, inserting + rows into your new HBase table via the HBase shell, and then cleaning + up and shutting down your standalone, local filesystem-based HBase instance. The below exercise + should take no more than ten minutes (not including download time). Local Filesystem and Durability - Using HBase with a LocalFileSystem does not currently guarantee durability. + Using HBase with a LocalFileSystem does not currently guarantee durability. The HDFS local filesystem implementation will lose edits if files are not properly closed -- which is very likely to happen when experimenting with a new download. - You need to run HBase on HDFS to ensure all writes are preserved. Running - against the local filesystem though will get you off the ground quickly and get you - familiar with how the general system works so lets run with it for now. See - and its associated issues for more details. + You need to run HBase on HDFS to ensure all writes are preserved. Running + against the local filesystem though will get you off the ground quickly and get you + familiar with how the general system works so lets run with it for now. See + and its associated issues for more details. - Loopback IP - - The below advice is for hbase-0.94.x and older versions only. We believe this fixed in hbase-0.96.0 and beyond -(let us know if we have it wrong). There should be no need of the below modification to /etc/hosts in -later versions of HBase. - - HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions, - for example, will default to 127.0.1.1 and this will cause problems for you - See Why does HBase care about /etc/hosts? for detail.. - - /etc/hosts should look something like this: - + Loopback IP + The below advice is for hbase-0.94.x and older versions only. We believe this fixed in hbase-0.96.0 and beyond + (let us know if we have it wrong). There should be no need of the below modification to /etc/hosts in + later versions of HBase. + + HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions, + for example, will default to 127.0.1.1 and this will cause problems for you + See Why does HBase care about /etc/hosts? for detail.. + + /etc/hosts should look something like this: + 127.0.0.1 localhost 127.0.0.1 ubuntu.ubuntu-domain ubuntu - - - - + + + +
Download and unpack the latest stable release. - + Choose a download site from this list of Apache Download - Mirrors. Click on the suggested top link. This will take you to a - mirror of HBase Releases. Click on the folder named - stable and then download the file that ends in - .tar.gz to your local filesystem; e.g. - hbase-0.94.2.tar.gz. - + xlink:href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache Download + Mirrors. Click on the suggested top link. This will take you to a + mirror of HBase Releases. Click on the folder named + stable and then download the file that ends in + .tar.gz to your local filesystem; e.g. + hbase-0.94.2.tar.gz. + Decompress and untar your download and then change into the - unpacked directory. - + unpacked directory. + $ tar xfz hbase-.tar.gz $ cd hbase- - + At this point, you are ready to start HBase. But before starting - it, edit conf/hbase-site.xml, the file you write - your site-specific configurations into. Set - hbase.rootdir, the directory HBase writes data to, - and hbase.zookeeper.property.dataDir, the directory - ZooKeeper writes its data too: -<?xml version="1.0"?> + it, edit conf/hbase-site.xml, the file you write + your site-specific configurations into. Set + hbase.rootdir, the directory HBase writes data to, + and hbase.zookeeper.property.dataDir, the directory + ZooKeeper writes its data too: + <?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> @@ -111,63 +110,63 @@ $ cd hbase- <value>/DIRECTORY/zookeeper</value> </property> </configuration> Replace DIRECTORY in the above with the - path to the directory you would have HBase and ZooKeeper write their data. By default, - hbase.rootdir is set to /tmp/hbase-${user.name} - and similarly so for the default ZooKeeper data location which means you'll lose all - your data whenever your server reboots unless you change it (Most operating systems clear - /tmp on restart). + path to the directory you would have HBase and ZooKeeper write their data. By default, + hbase.rootdir is set to /tmp/hbase-${user.name} + and similarly so for the default ZooKeeper data location which means you'll lose all + your data whenever your server reboots unless you change it (Most operating systems clear + /tmp on restart).
- +
Start HBase - + Now start HBase:$ ./bin/start-hbase.sh starting Master, logging to logs/hbase-user-master-example.org.out - + You should now have a running standalone HBase instance. In - standalone mode, HBase runs all daemons in the the one JVM; i.e. both - the HBase and ZooKeeper daemons. HBase logs can be found in the - logs subdirectory. Check them out especially if - it seems HBase had trouble starting. - + standalone mode, HBase runs all daemons in the the one JVM; i.e. both + the HBase and ZooKeeper daemons. HBase logs can be found in the + logs subdirectory. Check them out especially if + it seems HBase had trouble starting. + Is <application>java</application> installed? - + All of the above presumes a 1.6 version of Oracle - java is installed on your machine and - available on your path (See ); i.e. when you type - java, you see output that describes the - options the java program takes (HBase requires java 6). If this is not - the case, HBase will not start. Install java, edit - conf/hbase-env.sh, uncommenting the - JAVA_HOME line pointing it to your java install, then, - retry the steps above. + java is installed on your machine and + available on your path (See ); i.e. when you type + java, you see output that describes the + options the java program takes (HBase requires java 6). If this is not + the case, HBase will not start. Install java, edit + conf/hbase-env.sh, uncommenting the + JAVA_HOME line pointing it to your java install, then, + retry the steps above.
- +
Shell Exercises - + Connect to your running HBase via the shell. - + $ ./bin/hbase shell HBase Shell; enter 'help<RETURN>' for list of supported commands. Type "exit<RETURN>" to leave the HBase Shell Version: 0.90.0, r1001068, Fri Sep 24 13:55:42 PDT 2010 hbase(main):001:0> - + Type help and then - <RETURN> to see a listing of shell commands and - options. Browse at least the paragraphs at the end of the help emission - for the gist of how variables and command arguments are entered into the - HBase shell; in particular note how table names, rows, and columns, - etc., must be quoted. - - Create a table named test with a single column family named cf. - Verify its creation by listing all tables and then insert some - values. - + <RETURN> to see a listing of shell commands and + options. Browse at least the paragraphs at the end of the help emission + for the gist of how variables and command arguments are entered into the + HBase shell; in particular note how table names, rows, and columns, + etc., must be quoted. + + Create a table named test with a single column family named cf. + Verify its creation by listing all tables and then insert some + values. + hbase(main):003:0> create 'test', 'cf' 0 row(s) in 1.2200 seconds hbase(main):003:0> list 'test' @@ -179,59 +178,59 @@ hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2' 0 row(s) in 0.0370 seconds hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3' 0 row(s) in 0.0450 seconds - + Above we inserted 3 values, one at a time. The first insert is at - row1, column cf:a with a value of - value1. Columns in HBase are comprised of a column family prefix -- - cf in this example -- followed by a colon and then a - column qualifier suffix (a in this case). - + row1, column cf:a with a value of + value1. Columns in HBase are comprised of a column family prefix -- + cf in this example -- followed by a colon and then a + column qualifier suffix (a in this case). + Verify the data insert by running a scan of the table as follows - + hbase(main):007:0> scan 'test' ROW COLUMN+CELL row1 column=cf:a, timestamp=1288380727188, value=value1 row2 column=cf:b, timestamp=1288380738440, value=value2 row3 column=cf:c, timestamp=1288380747365, value=value3 3 row(s) in 0.0590 seconds - + Get a single row - + hbase(main):008:0> get 'test', 'row1' COLUMN CELL cf:a timestamp=1288380727188, value=value1 1 row(s) in 0.0400 seconds - + Now, disable and drop your table. This will clean up all done - above. - + above. + hbase(main):012:0> disable 'test' 0 row(s) in 1.0930 seconds hbase(main):013:0> drop 'test' 0 row(s) in 0.0770 seconds - + Exit the shell by typing exit. - + hbase(main):014:0> exit
- +
Stopping HBase - + Stop your hbase instance by running the stop script. - + $ ./bin/stop-hbase.sh stopping hbase...............
- +
Where to go next - + The above described standalone setup is good for testing and - experiments only. In the next chapter, , - we'll go into depth on the different HBase run modes, system requirements - running HBase, and critical configurations setting up a distributed HBase deploy. + experiments only. In the next chapter, , + we'll go into depth on the different HBase run modes, system requirements + running HBase, and critical configurations setting up a distributed HBase deploy.
- +
diff --git src/main/docbkx/ops_mgt.xml src/main/docbkx/ops_mgt.xml index aab8928..fda3e19 100644 --- src/main/docbkx/ops_mgt.xml +++ src/main/docbkx/ops_mgt.xml @@ -27,9 +27,9 @@ */ --> Apache HBase Operational Management - This chapter will cover operational tools and practices required of a running Apache HBase cluster. + This chapter will cover operational tools and practices required of a running Apache HBase cluster. The subject of operations is related to the topics of , , - and but is a distinct topic in itself. + and but is a distinct topic in itself.
HBase Tools and Utilities
@@ -57,7 +57,7 @@
private static final int INIT_ERROR_EXIT_CODE = 2;
private static final int TIMEOUT_ERROR_EXIT_CODE = 3;
private static final int ERROR_EXIT_CODE = 4;
Here are some examples based on the following case: there are two tables, test-01 and test-02; they have two column families, cf1 and cf2, respectively, and are deployed on the 3 region servers shown in the following table.
-
+
RegionServertest-01test-02
@@ -65,7 +65,7 @@
rs1r1 r2rs2r2 rs3r2 r1
-
+
            The following examples are based on the case given above.
Canary test for every column family (store) of every region of every table @@ -130,7 +130,7 @@ This run sets the timeout value to 60 seconds, the default value is 600 seconds.
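A sketch of the corresponding command-line invocations (the Canary class name and the -t flag are assumptions based on the tool shipped with contemporary releases; table names are those from the example case):
$ bin/hbase org.apache.hadoop.hbase.tool.Canary                   # every column family (store) of every region of every table
$ bin/hbase org.apache.hadoop.hbase.tool.Canary test-01 test-02   # restrict the check to specific tables
$ bin/hbase org.apache.hadoop.hbase.tool.Canary -t 60000          # set the timeout to 60 seconds (value in milliseconds)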
Health Checker
            You can configure HBase to run a script periodically and, if it fails N times (configurable), have the server exit.
-            See HBASE-7351 Periodic health check script for configurations and detail.
+            See HBASE-7351 Periodic health check script for configuration and details.
@@ -214,17 +214,17 @@ Valid program names are: Options: - starttime Beginning of the time range. Without endtime means starttime to forever. - endtime End of the time range. Without endtime means starttime to forever. - versions Number of cell versions to copy. - new.name New table's name. - peer.adr Address of the peer cluster given in the format hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent - families Comma-separated list of ColumnFamilies to copy. - all.cells Also copy delete markers and uncollected deleted cells (advanced option). + starttime Beginning of the time range. Without endtime means starttime to forever. + endtime End of the time range. Without endtime means starttime to forever. + versions Number of cell versions to copy. + new.name New table's name. + peer.adr Address of the peer cluster given in the format hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent + families Comma-separated list of ColumnFamilies to copy. + all.cells Also copy delete markers and uncollected deleted cells (advanced option). Args: - tablename Name of table to copy. + tablename Name of table to copy. Example of copying 'TestTable' to a cluster that uses replication for a 1 hour window: @@ -280,7 +280,7 @@ Valid program names are: These generated StoreFiles can be loaded into HBase via .
ImportTsv Options - Running ImportTsv with no arguments prints brief usage information: + Running ImportTsv with no arguments prints brief usage information: Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir> @@ -335,7 +335,7 @@ row10 c1 c2
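A hypothetical end-to-end invocation matching the sample data above, using the options listed in the usage text (the table name datatsv, column family d, and the HDFS paths are placeholders):
$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 \
    -Dimporttsv.bulk.output=hdfs://storefileoutput \
    datatsv hdfs://inputfile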
See Also - For more information about bulk-loading HFiles into HBase, see + For more information about bulk-loading HFiles into HBase, see
@@ -352,11 +352,11 @@ row10 c1 c2
CompleteBulkLoad Warning - Data generated via MapReduce is often created with file permissions that are not compatible with the running HBase process. Assuming you're running HDFS with permissions enabled, those permissions will need to be updated before you run CompleteBulkLoad. - + Data generated via MapReduce is often created with file permissions that are not compatible with the running HBase process. Assuming you're running HDFS with permissions enabled, those permissions will need to be updated before you run CompleteBulkLoad. + For more information about bulk-loading HFiles into HBase, see . +
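A minimal sketch of the load step itself (the directory and table names are placeholders; the class name assumes the bundled LoadIncrementalHFiles tool):
$ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs://storefileoutput mytable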
- For more information about bulk-loading HFiles into HBase, see . - +
WALPlayer @@ -384,7 +384,7 @@ row10 c1 c2
RowCounter and CellCounter - RowCounter is a + RowCounter is a mapreduce job to count all the rows of a table. This is a good utility to use as a sanity check to ensure that HBase can read all the blocks of a table if there are any concerns of metadata inconsistency. It will run the mapreduce all in a single process but it will run faster if you have a MapReduce cluster in place for it to exploit. @@ -394,16 +394,16 @@ row10 c1 c2 Note: caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration. HBase ships another diagnostic mapreduce job called - CellCounter. Like + CellCounter. Like RowCounter, it gathers more fine-grained statistics about your table. The statistics gathered by RowCounter are more fine-grained and include: - Total number of rows in the table. - Total number of CFs across all rows. - Total qualifiers across all rows. - Total occurrence of each CF. - Total occurrence of each qualifier. - Total number of versions of each qualifier. + Total number of rows in the table. + Total number of CFs across all rows. + Total qualifiers across all rows. + Total occurrence of each CF. + Total occurrence of each qualifier. + Total number of versions of each qualifier. The program allows you to limit the scope of the run. Provide a row regex or prefix to limit the rows to analyze. Use @@ -797,18 +797,18 @@ false HBase: - See + See OS: - IO Wait - User CPU + IO Wait + User CPU Java: - GC + GC @@ -829,19 +829,19 @@ false -hbase.ipc.warn.response.time Maximum number of milliseconds that a query can be run without being logged. Defaults to 10000, or 10 seconds. Can be set to -1 to disable logging by time. - -hbase.ipc.warn.response.size Maximum byte size of response that a query can return without being logged. Defaults to 100 megabytes. Can be set to -1 to disable logging by size. - +hbase.ipc.warn.response.time Maximum number of milliseconds that a query can be run without being logged. Defaults to 10000, or 10 seconds. Can be set to -1 to disable logging by time. + +hbase.ipc.warn.response.size Maximum byte size of response that a query can return without being logged. Defaults to 100 megabytes. Can be set to -1 to disable logging by size. +
Metrics
-The slow query log exposes to metrics to JMX.
-hadoop.regionserver_rpc_slowResponse a global metric reflecting the durations of all responses that triggered logging.
-hadoop.regionserver_rpc_methodName.aboveOneSec A metric reflecting the durations of all responses that lasted for more than one second.
+The slow query log exposes two metrics to JMX.
+hadoop.regionserver_rpc_slowResponse a global metric reflecting the durations of all responses that triggered logging.
+hadoop.regionserver_rpc_methodName.aboveOneSec a metric reflecting the durations of all responses that lasted for more than one second.
-
+
Output @@ -1005,19 +1005,18 @@ false
Snapshots operations and ACLs - If you are using security with the AccessController Coprocessor (See ), + If you are using security with the AccessController Coprocessor (See ), only a global administrator can take, clone, or restore a snapshot, and these actions do not capture the ACL rights. This means that restoring a table preserves the ACL rights of the existing table, - while cloning a table creates a new table that has no ACL rights until the administrator adds them. + while cloning a table creates a new table that has no ACL rights until the administrator adds them.
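            In practice this means re-granting permissions after a clone; a hedged HBase shell sketch (the table, snapshot, and user names are placeholders):
            hbase> snapshot 'myTable', 'MySnapshot'
            hbase> clone_snapshot 'MySnapshot', 'myNewTable'
            hbase> grant 'someuser', 'RW', 'myNewTable'    # the clone starts with no ACL rights, so grant them explicitly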
Export to another cluster
            The ExportSnapshot tool copies all the data related to a snapshot (hfiles, logs, snapshot metadata) to another cluster.
            The tool executes a Map-Reduce job, similar to distcp, to copy files between the two clusters,
-            and since it works at file-system level the hbase cluster does not have to be online.
+            and since it works at the file-system level, the HBase cluster does not have to be online.
            To copy a snapshot called MySnapshot to an HBase cluster srv2 (hdfs://srv2:8082/hbase) using 16 mappers:
            $ bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot MySnapshot -copy-to hdfs://srv2:8082/hbase -mappers 16
-
@@ -1027,14 +1026,14 @@ false
Physical data size Physical data size on disk is distinct from logical size of your data and is affected by the following: -Increased by HBase overhead +Increased by HBase overhead -See and . At least 24 bytes per key-value (cell), can be more. Small keys/values means more relative overhead. -KeyValue instances are aggregated into blocks, which are indexed. Indexes also have to be stored. Blocksize is configurable on a per-ColumnFamily basis. See . +See and . At least 24 bytes per key-value (cell), can be more. Small keys/values means more relative overhead. +KeyValue instances are aggregated into blocks, which are indexed. Indexes also have to be stored. Blocksize is configurable on a per-ColumnFamily basis. See . -Decreased by and data block encoding, depending on data. See also this thread. You might want to test what compression and encoding (if any) make sense for your data. -Increased by size of region server (usually fixed and negligible - less than half of RS memory size, per RS). -Increased by HDFS replication - usually x3. +Decreased by and data block encoding, depending on data. See also this thread. You might want to test what compression and encoding (if any) make sense for your data. +Increased by size of region server (usually fixed and negligible - less than half of RS memory size, per RS). +Increased by HDFS replication - usually x3. Aside from the disk space necessary to store the data, one RS may not be able to serve arbitrarily large amounts of data due to some practical limits on region count and size (see ).
@@ -1073,8 +1072,8 @@ E.g. if RS has 16Gb RAM, with default settings, it is 16384*0.4/128 ~ 51 regions
Initial configuration and tuning First, see . Note that some configurations, more than others, depend on specific scenarios. Pay special attention to - - request handler thread count, vital for high-throughput workloads. - - the blocking number of WAL files depends on your memstore configuration and should be set accordingly to prevent potential blocking when doing high volume of writes. + - request handler thread count, vital for high-throughput workloads. + - the blocking number of WAL files depends on your memstore configuration and should be set accordingly to prevent potential blocking when doing high volume of writes. Then, there are some considerations when setting up your cluster and tables.
Compactions diff --git src/main/docbkx/performance.xml src/main/docbkx/performance.xml index 174baf2..9ad8653 100644 --- src/main/docbkx/performance.xml +++ src/main/docbkx/performance.xml @@ -52,9 +52,9 @@ Important items to consider: - Switching capacity of the device - Number of systems connected - Uplink capacity + Switching capacity of the device + Number of systems connected + Uplink capacity
@@ -72,9 +72,9 @@ Mitigation of this issue is fairly simple and can be accomplished in multiple ways: - Use appropriate hardware for the scale of the cluster which you're attempting to build. - Use larger single switch configurations i.e. single 48 port as opposed to 2x 24 port - Configure port trunking for uplinks to utilize multiple interfaces to increase cross switch bandwidth. + Use appropriate hardware for the scale of the cluster which you're attempting to build. + Use larger single switch configurations i.e. single 48 port as opposed to 2x 24 port + Configure port trunking for uplinks to utilize multiple interfaces to increase cross switch bandwidth.
@@ -82,8 +82,8 @@ Multiple Racks Multiple rack configurations carry the same potential issues as multiple switches, and can suffer performance degradation from two main areas: - Poor switch capacity performance - Insufficient uplink to another rack + Poor switch capacity performance + Insufficient uplink to another rack If the the switches in your rack have appropriate switching capacity to handle all the hosts at full speed, the next most likely issue will be caused by homing more of your cluster across racks. The easiest way to avoid issues when spanning multiple racks is to use port trunking to create a bonded uplink to other racks. diff --git src/main/docbkx/schema_design.xml src/main/docbkx/schema_design.xml index 13ebc2f..d5f81c5 100644 --- src/main/docbkx/schema_design.xml +++ src/main/docbkx/schema_design.xml @@ -32,7 +32,7 @@ the various non-rdbms datastores is Ian Varley's Master thesis, No Relation: The Mixed Blessings of Non-Relational Databases. Recommended. Also, read for how HBase stores data internally, and the section on - HBase Schema Design Case Studies. + .
@@ -41,7 +41,7 @@ <para>HBase schemas can be created or updated with <xref linkend="shell" /> or by using <link xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html">HBaseAdmin</link> in the Java API. </para> - <para>Tables must be disabled when making ColumnFamily modifications, for example.. + <para>Tables must be disabled when making ColumnFamily modifications, for example:</para> <programlisting> Configuration config = HBaseConfiguration.create(); HBaseAdmin admin = new HBaseAdmin(conf); @@ -56,7 +56,7 @@ admin.modifyColumn(table, cf2); // modifying existing ColumnFamily admin.enableTable(table); </programlisting> - </para>See <xref linkend="client_dependencies"/> for more information about configuring client connections. + <para>See <xref linkend="client_dependencies"/> for more information about configuring client connections.</para> <para>Note: online schema changes are supported in the 0.92.x codebase, but the 0.90.x codebase requires the table to be disabled. </para> @@ -98,7 +98,7 @@ admin.enableTable(table); Monotonically Increasing Row Keys/Timeseries Data - In the HBase chapter of Tom White's book Hadoop: The Definitive Guide (O'Reilly) there is a an optimization note on watching out for a phenomenon where an import process walks in lock-step with all clients in concert pounding one of the table's regions (and thus, a single node), then moving onto the next region, etc. With monotonically increasing row-keys (i.e., using a timestamp), this will happen. See this comic by IKai Lan on why monotonically increasing row keys are problematic in BigTable-like datastores: + In the HBase chapter of Tom White's book Hadoop: The Definitive Guide (O'Reilly) there is a an optimization note on watching out for a phenomenon where an import process walks in lock-step with all clients in concert pounding one of the table's regions (and thus, a single node), then moving onto the next region, etc. With monotonically increasing row-keys (i.e., using a timestamp), this will happen. See this comic by IKai Lan on why monotonically increasing row keys are problematic in BigTable-like datastores: monotonically increasing values are bad. The pile-up on a single region brought on by monotonically increasing keys can be mitigated by randomizing the input records to not be in sorted order, but in general it's best to avoid using a timestamp or a sequence (e.g. 1, 2, 3) as the row-key. @@ -107,7 +107,7 @@ admin.enableTable(table); successful example. It has a page describing the schema it uses in HBase. The key format in OpenTSDB is effectively [metric_type][event_timestamp], which would appear at first glance to contradict the previous advice about not using a timestamp as the key. However, the difference is that the timestamp is not in the lead position of the key, and the design assumption is that there are dozens or hundreds (or more) of different metric types. Thus, even with a continual stream of input data with a mix of metric types, the Puts are distributed across various points of regions in the table. - See HBase Schema Design Case Studies for some rowkey design examples. + See for some rowkey design examples.
@@ -119,7 +119,7 @@ admin.enableTable(table); are large, especially compared to the size of the cell value, then you may run up against some interesting scenarios. One such is the case described by Marc Limotte at the tail of - HBASE-3551 + HBASE-3551 (recommended!). Therein, the indices that are kept on HBase storefiles () to facilitate random access may end up occupyng large chunks of the HBase @@ -213,7 +213,7 @@ COLUMN CELL The most recent value for [key] in a table can be found by performing a Scan for [key] and obtaining the first record. Since HBase keys are in sorted order, this key sorts before any older row-keys for [key] and thus is first. - This technique would be used instead of using HBase Versioning where the intent is to hold onto all versions + This technique would be used instead of using where the intent is to hold onto all versions "forever" (or a very long time) and at the same time quickly obtain access to any other version by using the same Scan technique.
@@ -336,7 +336,7 @@ public static byte[][] getHexSplits(String startKey, String endKey, int numRegio
          converted to an array of bytes can be stored as a value.  Input could be strings, numbers, complex objects, or even images, as long as they can
          be rendered as bytes.  There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase would probably be too much to ask);
-          search the mailling list for conversations on this topic. All rows in HBase conform to the datamodel, and
+          search the mailing list for conversations on this topic. All rows in HBase conform to the , and
          that includes versioning.  Take that into consideration when making your design, as well as block size for the ColumnFamily.
@@ -388,10 +388,10 @@ public static byte[][] getHexSplits(String startKey, String endKey, int numRegio There is no single answer on the best way to handle this because it depends on... - Number of users - Data size and data arrival rate - Flexibility of reporting requirements (e.g., completely ad-hoc date selection vs. pre-configured ranges) - Desired execution speed of query (e.g., 90 seconds may be reasonable to some for an ad-hoc report, whereas it may be too long for others) + Number of users + Data size and data arrival rate + Flexibility of reporting requirements (e.g., completely ad-hoc date selection vs. pre-configured ranges) + Desired execution speed of query (e.g., 90 seconds may be reasonable to some for an ad-hoc report, whereas it may be too long for others) ... and solutions are also influenced by the size of the cluster and how much processing power you have to throw at the solution. Common techniques are in sub-sections below. This is a comprehensive, but not exhaustive, list of approaches. @@ -453,26 +453,26 @@ public static byte[][] getHexSplits(String startKey, String endKey, int numRegio can be approached. Note: this is just an illustration of potential approaches, not an exhaustive list. Know your data, and know your processing requirements. - It is highly recommended that you read the rest of the Schema Design Chapter first, before reading + It is highly recommended that you read the rest of the first, before reading these case studies. Thee following case studies are described: - Log Data / Timeseries Data - Log Data / Timeseries on Steroids - Customer/Order - Tall/Wide/Middle Schema Design - List Data + Log Data / Timeseries Data + Log Data / Timeseries on Steroids + Customer/Order + Tall/Wide/Middle Schema Design + List Data
Case Study - Log Data and Timeseries Data Assume that the following data elements are being collected. - Hostname - Timestamp - Log event - Value/message + Hostname + Timestamp + Log event + Value/message We can store them in an HBase table called LOG_DATA, but what will the rowkey be? From these attributes the rowkey will be some combination of hostname, timestamp, and log-event - but what specifically? @@ -529,9 +529,9 @@ long bucket = timestamp % numBuckets; Composite Rowkey With Hashes: - [MD5 hash of hostname] = 16 bytes - [MD5 hash of event-type] = 16 bytes - [timestamp] = 8 bytes + [MD5 hash of hostname] = 16 bytes + [MD5 hash of event-type] = 16 bytes + [timestamp] = 8 bytes Composite Rowkey With Numeric Substitution: @@ -539,17 +539,17 @@ long bucket = timestamp % numBuckets; For this approach another lookup table would be needed in addition to LOG_DATA, called LOG_TYPES. The rowkey of LOG_TYPES would be: - [type] (e.g., byte indicating hostname vs. event-type) - [bytes] variable length bytes for raw hostname or event-type. + [type] (e.g., byte indicating hostname vs. event-type) + [bytes] variable length bytes for raw hostname or event-type. A column for this rowkey could be a long with an assigned number, which could be obtained by using an HBase counter. So the resulting composite rowkey would be: - [substituted long for hostname] = 8 bytes - [substituted long for event type] = 8 bytes - [timestamp] = 8 bytes + [substituted long for hostname] = 8 bytes + [substituted long for event type] = 8 bytes + [timestamp] = 8 bytes In either the Hash or Numeric substitution approach, the raw values for hostname and event-type can be stored as columns. @@ -584,19 +584,19 @@ long bucket = timestamp % numBuckets; The Customer record type would include all the things that you’d typically expect: - Customer number - Customer name - Address (e.g., city, state, zip) - Phone numbers, etc. + Customer number + Customer name + Address (e.g., city, state, zip) + Phone numbers, etc. The Order record type would include things like: - Customer number - Order number - Sales date - A series of nested objects for shipping locations and line-items (see - for details) + Customer number + Order number + Sales date + A series of nested objects for shipping locations and line-items (see + for details) Assuming that the combination of customer number and sales order uniquely identify an order, these two attributes will compose @@ -612,14 +612,14 @@ reasonable spread in the keyspace, similar options appear: Composite Rowkey With Hashes: - [MD5 of customer number] = 16 bytes - [MD5 of order number] = 16 bytes + [MD5 of customer number] = 16 bytes + [MD5 of order number] = 16 bytes Composite Numeric/Hash Combo Rowkey: - [substituted long for customer number] = 8 bytes - [MD5 of order number] = 16 bytes + [substituted long for customer number] = 8 bytes + [MD5 of order number] = 16 bytes
@@ -629,15 +629,15 @@ reasonable spread in the keyspace, similar options appear: Customer Record Type Rowkey: - [customer-id] - [type] = type indicating ‘1’ for customer record type + [customer-id] + [type] = type indicating ‘1’ for customer record type Order Record Type Rowkey: - [customer-id] - [type] = type indicating ‘2’ for order record type - [order] + [customer-id] + [type] = type indicating ‘2’ for order record type + [order] The advantage of this particular CUSTOMER++ approach is that organizes many different record-types by customer-id @@ -663,23 +663,23 @@ reasonable spread in the keyspace, similar options appear: The SHIPPING_LOCATION's composite rowkey would be something like this: - [order-rowkey] - [shipping location number] (e.g., 1st location, 2nd, etc.) + [order-rowkey] + [shipping location number] (e.g., 1st location, 2nd, etc.) The LINE_ITEM table's composite rowkey would be something like this: - [order-rowkey] - [shipping location number] (e.g., 1st location, 2nd, etc.) - [line item number] (e.g., 1st lineitem, 2nd, etc.) + [order-rowkey] + [shipping location number] (e.g., 1st location, 2nd, etc.) + [line item number] (e.g., 1st lineitem, 2nd, etc.) Such a normalized model is likely to be the approach with an RDBMS, but that's not your only option with HBase. The cons of such an approach is that to retrieve information about any Order, you will need: - Get on the ORDER table for the Order - Scan on the SHIPPING_LOCATION table for that order to get the ShippingLocation instances - Scan on the LINE_ITEM for each ShippingLocation + Get on the ORDER table for the Order + Scan on the SHIPPING_LOCATION table for that order to get the ShippingLocation instances + Scan on the LINE_ITEM for each ShippingLocation ... granted, this is what an RDBMS would do under the covers anyway, but since there are no joins in HBase you're just more aware of this fact. @@ -691,23 +691,23 @@ reasonable spread in the keyspace, similar options appear: The Order rowkey was described above: - [order-rowkey] - [ORDER record type] + [order-rowkey] + [ORDER record type] The ShippingLocation composite rowkey would be something like this: - [order-rowkey] - [SHIPPING record type] - [shipping location number] (e.g., 1st location, 2nd, etc.) + [order-rowkey] + [SHIPPING record type] + [shipping location number] (e.g., 1st location, 2nd, etc.) The LineItem composite rowkey would be something like this: - [order-rowkey] - [LINE record type] - [shipping location number] (e.g., 1st location, 2nd, etc.) - [line item number] (e.g., 1st lineitem, 2nd, etc.) + [order-rowkey] + [LINE record type] + [shipping location number] (e.g., 1st location, 2nd, etc.) + [line item number] (e.g., 1st lineitem, 2nd, etc.)
@@ -718,21 +718,21 @@ reasonable spread in the keyspace, similar options appear:
 
             The LineItem composite rowkey would be something like this:
             
-                [order-rowkey]
-                [LINE record type]
-                [line item number]  (e.g., 1st lineitem, 2nd, etc. - care must be taken that there are unique across the entire order)
+                [order-rowkey]
+                [LINE record type]
+                [line item number]  (e.g., 1st lineitem, 2nd, etc. - care must be taken that they are unique across the entire order)
             
             ... and the LineItem columns would be something like this:
             
-                itemNumber
-                quantity
-                price
-                shipToLine1 (denormalized from ShippingLocation)
-                shipToLine2 (denormalized from ShippingLocation)
-                shipToCity (denormalized from ShippingLocation)
-                shipToState (denormalized from ShippingLocation)
-                shipToZip (denormalized from ShippingLocation)
+                itemNumber
+                quantity
+                price
+                shipToLine1 (denormalized from ShippingLocation)
+                shipToLine2 (denormalized from ShippingLocation)
+                shipToCity (denormalized from ShippingLocation)
+                shipToState (denormalized from ShippingLocation)
+                shipToZip (denormalized from ShippingLocation)
             
             The pros of this approach include a less complex object hierarchy, but one of the cons is that updating gets more
@@ -786,7 +786,7 @@ reasonable spread in the keyspace, similar options appear:
       OpenTSDB is the best example of this case where a single row represents a defined time-range, and then discrete events are treated as
      columns.  This approach is often more complex, and may require the additional complexity of re-writing your data, but has the advantage of being I/O efficient.
      For an overview of this approach, see
-      .
+      .
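As a hedged sketch of that row-per-time-range layout (the table handle, the "d" column family, and the one-hour bucket width are illustrative assumptions, not taken from OpenTSDB):

import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: one row per [metric][hour bucket]; each event becomes a column whose
// qualifier is the event's offset (in milliseconds) into the bucket.
public static void recordEvent(HTable table, byte[] metricId, long eventTs, byte[] value) throws IOException {
    long bucketWidthMs = 3600L * 1000L;                      // one-hour buckets, for example
    long bucketStart   = eventTs - (eventTs % bucketWidthMs);
    byte[] row = Bytes.add(metricId, Bytes.toBytes(bucketStart));
    Put put = new Put(row);
    put.add(Bytes.toBytes("d"),                              // column family
            Bytes.toBytes((int) (eventTs - bucketStart)),    // qualifier = offset within the bucket
            value);
    table.put(put);
}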
@@ -812,7 +812,7 @@ we could have something like: <FixedWidthUserName><FixedWidthValueId3>:"" (no value) -The other option we had was to do this entirely using: +The other option we had was to do this entirely using: <FixedWidthUserName><FixedWidthPageNum0>:<FixedWidthLength><FixedIdNextPageNum><ValueId1><ValueId2><ValueId3>... <FixedWidthUserName><FixedWidthPageNum1>:<FixedWidthLength><FixedIdNextPageNum><ValueId1><ValueId2><ValueId3>... @@ -824,8 +824,8 @@ So in one case reading the first thirty values would be: scan { STARTROW => 'FixedWidthUsername' LIMIT => 30} -And in the second case it would be - +And in the second case it would be + get 'FixedWidthUserName\x00\x00\x00\x00' diff --git src/main/docbkx/security.xml src/main/docbkx/security.xml index 4797522..1bac920 100644 --- src/main/docbkx/security.xml +++ src/main/docbkx/security.xml @@ -517,7 +517,7 @@ You must configure HBase for secure or simple user access operation. Refer to the Secure Client Access to HBase or - Simple User Access to HBase + Simple User Access to HBase sections and complete all of the steps described there. @@ -779,14 +779,16 @@ Access control mechanisms are mature and fairly standardized in the relational d The HBase shell has been extended to provide simple commands for editing and updating user permissions. The following commands have been added for access control list management: - Grant - - + + + + Grant + grant <user|@group> <permissions> [ <table> [ <column family> [ <column qualifier> ] ] ] - - + + - <user|@group> is user or group (start with character '@'), Groups are created and manipulated via the Hadoop group mapping service. + <user|@group> is user or group (start with character '@'), Groups are created and manipulated via the Hadoop group mapping service. <permissions> is zero or more letters from the set "RWCA": READ('R'), WRITE('W'), CREATE('C'), ADMIN('A'). @@ -794,32 +796,32 @@ The HBase shell has been extended to provide simple commands for editing and upd Note: Grants and revocations of individual permissions on a resource are both accomplished using the grant command. A separate revoke command is also provided by the shell, but this is for fast revocation of all of a user's access rights to a given resource only. - - Revoke - - + + Revoke + revoke <user|@group> [ <table> [ <column family> [ <column qualifier> ] ] ] - - - Alter - + + + Alter + - The alter command has been extended to allow ownership assignment: + The alter command has been extended to allow ownership assignment: alter 'tablename', {OWNER => 'username|@group'} - - - User Permission - + + + + User Permission + - The user_permission command shows all access permissions for the current user for a given table: + The user_permission command shows all access permissions for the current user for a given table: user_permission <table> - +
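For example, a typical shell session using these commands might look like the following (the user, table and column family names are hypothetical):

hbase> grant 'bobsmith', 'RW', 't1', 'f1'
hbase> user_permission 't1'
hbase> alter 't1', {OWNER => 'bobsmith'}
hbase> revoke 'bobsmith', 't1', 'f1'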
@@ -829,12 +831,12 @@ The HBase shell has been extended to provide simple commands for editing and upd
 Bulk loading in secure mode is a bit more involved than normal setup, since the client has to transfer the ownership of the files generated from the mapreduce job to HBase. Secure bulk loading is implemented by a coprocessor, named SecureBulkLoadEndpoint. SecureBulkLoadEndpoint uses a staging directory "hbase.bulkload.staging.dir", which defaults to /tmp/hbase-staging/. The algorithm is as follows.
-        Create an hbase owned staging directory which is world traversable (-rwx--x--x, 711) /tmp/hbase-staging.
-        A user writes out data to his secure output directory: /user/foo/data
-        A call is made to hbase to create a secret staging directory
-  which is globally readable/writable (-rwxrwxrwx, 777): /tmp/hbase-staging/averylongandrandomdirectoryname
-        The user makes the data world readable and writable, then moves it
-  into the random staging directory, then calls bulkLoadHFiles()
+        Create an hbase-owned staging directory which is world traversable (-rwx--x--x, 711): /tmp/hbase-staging.
+        A user writes out data to a secure output directory: /user/foo/data
+        A call is made to hbase to create a secret staging directory
+  which is globally readable/writable (-rwxrwxrwx, 777): /tmp/hbase-staging/averylongandrandomdirectoryname
+        The user makes the data world readable and writable, then moves it
+  into the random staging directory, then calls bulkLoadHFiles()


@@ -870,9 +872,9 @@ The HBase shell has been extended to provide simple commands for editing and upd
 Visibility expressions like the above can be added when storing or mutating a cell using the API,
-          Mutation#setCellVisibility(new CellVisibility(String labelExpession));
-          Where the labelExpression could be '( secret | topsecret ) & !probationary'
-          
+          Mutation#setCellVisibility(new CellVisibility(String labelExpression));
+          Where the labelExpression could be '( secret | topsecret ) & !probationary'
+          
 We build the user's label set in the RPC context when a request is first received by the HBase RegionServer. How users are associated with labels is pluggable. The default plugin passes through labels specified in Authorizations added to the Get or Scan and checks those against the calling user's authenticated labels list. When the client passes labels for which the user is not authenticated, this default algorithm will drop them. One can pass a subset of user authenticated labels via the Scan/Get authorizations.
@@ -896,7 +898,7 @@ The HBase shell has been extended to provide simple commands for editing and upd
-
+
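Putting the visibility pieces above together, a minimal sketch (the table handle, row, column family, qualifier and label names are illustrative) of writing a cell with a visibility expression and reading it back with matching authorizations:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: store a labeled cell, then read it back with an authorization.
public static void labeledReadWrite(HTable table) throws IOException {
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("info"), Bytes.toBytes("col"), Bytes.toBytes("value"));
    put.setCellVisibility(new CellVisibility("( secret | topsecret ) & !probationary"));
    table.put(put);

    Get get = new Get(Bytes.toBytes("row1"));
    get.setAuthorizations(new Authorizations("secret"));  // only labels the user is authenticated for are honored
    Result result = table.get(get);
}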
User Label Association A set of labels can be associated with a user by using the API VisibilityClient#setAuths(Configuration conf, final String[] auths, final String user) diff --git src/main/docbkx/troubleshooting.xml src/main/docbkx/troubleshooting.xml index d9b7009..1a71d47 100644 --- src/main/docbkx/troubleshooting.xml +++ src/main/docbkx/troubleshooting.xml @@ -235,7 +235,7 @@ export SERVER_GC_OPTS="$SERVER_GC_OPTS -XX:NewSize=64m -XX:MaxNewSize=64m" is generally used for questions on released versions of Apache HBase. Before going to the mailing list, make sure your question has not already been answered by searching the mailing list archives first. Use . - Take some time crafting your questionSee Getting Answers; a quality question that includes all context and + Take some time crafting your questionSee Getting Answers; a quality question that includes all context and exhibits evidence the author has tried to find answers in the manual and out on lists is more likely to get a prompt response. @@ -360,15 +360,15 @@ hadoop@sv4borg12:~$ jps In order, we see a: - Hadoop TaskTracker, manages the local Childs - HBase RegionServer, serves regions - Child, its MapReduce task, cannot tell which type exactly - Hadoop TaskTracker, manages the local Childs - Hadoop DataNode, serves blocks - HQuorumPeer, a ZooKeeper ensemble member - Jps, well… it’s the current process - ThriftServer, it’s a special one will be running only if thrift was started - jmx, this is a local process that’s part of our monitoring platform ( poorly named maybe). You probably don’t have that. + Hadoop TaskTracker, manages the local Childs + HBase RegionServer, serves regions + Child, its MapReduce task, cannot tell which type exactly + Hadoop TaskTracker, manages the local Childs + Hadoop DataNode, serves blocks + HQuorumPeer, a ZooKeeper ensemble member + Jps, well… it’s the current process + ThriftServer, it’s a special one will be running only if thrift was started + jmx, this is a local process that’s part of our monitoring platform ( poorly named maybe). You probably don’t have that. @@ -622,8 +622,8 @@ Harsh J investigated the issue as part of the mailing list thread Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing) You are likely running into the issue that is described and worked through in -the mail thread HBase, mail # user - Suspected memory leak -and continued over in HBase, mail # dev - FeedbackRe: Suspected memory leak. +the mail thread HBase, mail # user - Suspected memory leak +and continued over in HBase, mail # dev - FeedbackRe: Suspected memory leak. A workaround is passing your client-side JVM a reasonable value for -XX:MaxDirectMemorySize. By default, the MaxDirectMemorySize is equal to your -Xmx max heapsize setting (if -Xmx is set). Try seting it to something smaller (for example, one user had success setting it to 1g when @@ -748,7 +748,7 @@ Caused by: java.io.FileNotFoundException: File _partition.lst does not exist. /<HLog> (WAL HLog files for the RegionServer) - See the HDFS User Guide for other non-shell diagnostic + See the HDFS User Guide for other non-shell diagnostic utilities like fsck.
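For instance, a quick integrity check of the HBase area in HDFS could be run as follows (the path assumes the default hbase.rootdir of /hbase; adjust it to your layout):

hadoop fsck /hbase -files -blocks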
@@ -924,25 +924,26 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
      Since the RegionServer's local ZooKeeper client cannot send heartbeats, the session times out.
      By design, we shut down any node that isn't able to contact the ZooKeeper ensemble after getting a timeout so that it stops serving data that may already be assigned elsewhere.
      
-      
+      
-        Make sure you give plenty of RAM (in hbase-env.sh), the default of 1GB won't be able to sustain long running imports.
-        Make sure you don't swap, the JVM never behaves well under swapping.
-        Make sure you are not CPU starving the RegionServer thread. For example, if you are running a MapReduce job using 6 CPU-intensive tasks on a machine with 4 cores, you are probably starving the RegionServer enough to create longer garbage collection pauses.
-        Increase the ZooKeeper session timeout
+        Make sure you give plenty of RAM (in hbase-env.sh); the default of 1GB won't be able to sustain long-running imports.
+        Make sure you don't swap; the JVM never behaves well under swapping.
+        Make sure you are not CPU starving the RegionServer thread. For example, if you are running a MapReduce job using 6 CPU-intensive tasks on a machine with 4 cores, you are probably starving the RegionServer enough to create longer garbage collection pauses.
+        Increase the ZooKeeper session timeout
 
-        If you wish to increase the session timeout, add the following to your hbase-site.xml to increase the timeout from the default of 60 seconds to 120 seconds.
-        
-<property>
-    <name>zookeeper.session.timeout</name>
-    <value>1200000</value>
-</property>
-<property>
-    <name>hbase.zookeeper.property.tickTime</name>
-    <value>6000</value>
-</property>
+        If you wish to increase the session timeout, add the following to your hbase-site.xml. The zookeeper.session.timeout value is given in milliseconds, so the setting below raises the timeout from the default of 60 seconds to 1200 seconds (20 minutes).
+        
+<![CDATA[
+<property>
+    <name>zookeeper.session.timeout</name>
+    <value>1200000</value>
+</property>
+<property>
+    <name>hbase.zookeeper.property.tickTime</name>
+    <value>6000</value>
+</property>
+]]>
 
-        
+        
        Be aware that setting a higher timeout means that the regions served by a failed RegionServer will take at least that amount of time
        to be transferred to another RegionServer.  For a production system serving live requests, we would instead
@@ -952,8 +953,8 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
        If this is happening during an upload which only happens once (like initially loading all your data into HBase), consider bulk loading.
 
-        See  for other general information about ZooKeeper troubleshooting.
-
+See for other general information about ZooKeeper troubleshooting. +
NotServingRegionException
            This exception is "normal" when found in the RegionServer logs at DEBUG level.  This exception is returned back to the client
            and then the client goes back to hbase:meta to find the new location of the moved region.
        However, if the NotServingRegionException is logged ERROR, then the client ran out of retries and something is probably wrong.
    
        
    RegionServer is not using the name given it by the master; double entry in master listing of servers for gory details.
    
-
+
Logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor' messages We are not using the native versions of compression @@ -990,7 +991,7 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
Shutdown Errors - +
@@ -1018,7 +1019,7 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
Shutdown Errors - +
diff --git src/main/docbkx/upgrading.xml src/main/docbkx/upgrading.xml index 22324a3..642ad67 100644 --- src/main/docbkx/upgrading.xml +++ src/main/docbkx/upgrading.xml @@ -226,8 +226,8 @@ Now start up hbase-0.96.0.
-
Troubleshooting -
Old Client connecting to 0.96 cluster +
Troubleshooting +
Old Client connecting to 0.96 cluster It will fail with an exception like the below. Upgrade. 17:22:15 Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF 17:22:15 * @@ -267,20 +267,20 @@ If you've not patience, here are the important things to know upgrading. -Once you upgrade, you can’t go back. +Once you upgrade, you can’t go back. - -MSLAB is on by default. Watch that heap usage if you have a lot of regions. + +MSLAB is on by default. Watch that heap usage if you have a lot of regions. - + Distributed splitting is on by defaul. It should make region server failover faster. - - + + There’s a separate tarball for security. - - + + If -XX:MaxDirectMemorySize is set in your hbase-env.sh, it’s going to enable the experimental off-heap cache (You may not want this). - + @@ -302,7 +302,7 @@ This means you cannot go back to 0.90.x once you’ve started HBase 0.92.0 over In 0.92.0, the hbase.hregion.memstore.mslab.enabled flag is set to true (See ). In 0.90.x it was false. When it is enabled, memstores will step allocate memory in MSLAB 2MB chunks even if the memstore has zero or just a few small elements. This is fine usually but if you had lots of regions per regionserver in a 0.90.x cluster (and MSLAB was off), -you may find yourself OOME'ing on upgrade because the thousands of regions * number of column families * 2MB MSLAB (at a minimum) +you may find yourself OOME'ing on upgrade because the thousands of regions * number of column families * 2MB MSLAB (at a minimum) puts your heap over the top. Set hbase.hregion.memstore.mslab.enabled to false or set the MSLAB size down from 2MB by setting hbase.hregion.memstore.mslab.chunksize to something less. diff --git src/main/docbkx/zookeeper.xml src/main/docbkx/zookeeper.xml index 4a257b5..d4d8774 100644 --- src/main/docbkx/zookeeper.xml +++ src/main/docbkx/zookeeper.xml @@ -219,7 +219,7 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper standalone Zookeeper quorum) for ease of learning. -
Operating System Prerequisites
+
Operating System Prerequisites You need to have a working Kerberos KDC setup. For @@ -283,7 +283,7 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper We'll refer to this JAAS configuration file as $CLIENT_CONF below. - +
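As an illustration only (your login module options may differ from site to site), a client-side JAAS file of the kind referred to here as $CLIENT_CONF commonly looks something like:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true;
};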
HBase-managed Zookeeper Configuration @@ -312,10 +312,10 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper }; - where the $PATH_TO_HBASE_KEYTAB and + where the $PATH_TO_HBASE_KEYTAB and $PATH_TO_ZOOKEEPER_KEYTAB files are what you created above, and $HOST is the hostname for that - node. + node. The Server section will be used by the Zookeeper quorum server, while the @@ -342,9 +342,9 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF" - where $HBASE_SERVER_CONF and + where $HBASE_SERVER_CONF and $CLIENT_CONF are the full paths to the - JAAS configuration files created above. + JAAS configuration files created above. Modify your hbase-site.xml on each node that will run zookeeper, master or regionserver to contain: @@ -537,10 +537,10 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
Configuration from Scratch - This has been tested on the current standard Amazon + This has been tested on the current standard Amazon Linux AMI. First setup KDC and principals as described above. Next checkout code and run a sanity - check. + check. git clone git://git.apache.org/hbase.git @@ -548,9 +548,9 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper mvn clean test -Dtest=TestZooKeeperACL - Then configure HBase as described above. - Manually edit target/cached_classpath.txt (see below).. - + Then configure HBase as described above. + Manually edit target/cached_classpath.txt (see below): + bin/hbase zookeeper & bin/hbase master & @@ -582,14 +582,16 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper programmatically - This would avoid the need for a separate Hadoop jar + This would avoid the need for a separate Hadoop jar that fixes HADOOP-7070. +
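A sketch of what such programmatic configuration could look like using only the standard JAAS API (the keytab path and principal below are placeholders, and this is not an existing HBase facility):

import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Sketch only: install an in-process JAAS configuration instead of pointing
// -Djava.security.auth.login.config at an external file.
public class InMemoryJaasConfig extends Configuration {
    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        Map<String, String> options = new HashMap<String, String>();
        options.put("useKeyTab", "true");
        options.put("keyTab", "/path/to/hbase.keytab");          // placeholder path
        options.put("principal", "hbase/myhost@EXAMPLE.COM");    // placeholder principal
        options.put("storeKey", "true");
        return new AppConfigurationEntry[] {
            new AppConfigurationEntry("com.sun.security.auth.module.Krb5LoginModule",
                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, options)
        };
    }
}
// Installed with: Configuration.setConfiguration(new InMemoryJaasConfig());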
Elimination of <code>kerberos.removeHostFromPrincipal</code> and <code>kerberos.removeRealmFromPrincipal</code> +