Index: src/docbkx/book.xml
===================================================================
--- src/docbkx/book.xml (revision 1209994)
+++ src/docbkx/book.xml (working copy)
@@ -1938,119 +1938,11 @@
-
-
- Compression In HBaseCompression
-
-
- CompressionTest Tool
-
- HBase includes a tool to test compression is set up properly.
- To run it, type /bin/hbase org.apache.hadoop.hbase.util.CompressionTest.
- This will emit usage on how to run the tool.
-
-
-
-
-
-
- hbase.regionserver.codecs
-
-
-
- To have a RegionServer test a set of codecs and fail-to-start if any
- code is missing or misinstalled, add the configuration
-
- hbase.regionserver.codecs
-
- to your hbase-site.xml with a value of
- codecs to test on startup. For example if the
-
- hbase.regionserver.codecs
- value is lzo,gz and if lzo is not present
- or improperly installed, the misconfigured RegionServer will fail
- to start.
-
-
- Administrators might make use of this facility to guard against
- the case where a new server is added to cluster but the cluster
- requires install of a particular coded.
-
-
-
-
-
- LZO
-
- Unfortunately, HBase cannot ship with LZO because of
- the licensing issues; HBase is Apache-licensed, LZO is GPL.
- Therefore LZO install is to be done post-HBase install.
- See the Using LZO Compression
- wiki page for how to make LZO work with HBase.
-
- A common problem users run into when using LZO is that while initial
- setup of the cluster runs smooth, a month goes by and some sysadmin goes to
- add a machine to the cluster only they'll have forgotten to do the LZO
- fixup on the new machine. In versions since HBase 0.90.0, we should
- fail in a way that makes it plain what the problem is, but maybe not.
- See
- for a feature to help protect against failed LZO install.
-
-
-
-
- GZIP
-
-
- GZIP will generally compress better than LZO though slower.
- For some setups, better compression may be preferred.
- Java will use java's GZIP unless the native Hadoop libs are
- available on the CLASSPATH; in this case it will use native
- compressors instead (If the native libs are NOT present,
- you will see lots of Got brand-new compressor
- reports in your logs; see ).
-
-
-
-
- SNAPPY
-
-
- If snappy is installed, HBase can make use of it (courtesy of
- hadoop-snappy
- See Alejandro's note up on the list on difference between Snappy in Hadoop
- and Snappy in HBase).
-
-
-
-
- Build and install snappy on all nodes
- of your cluster.
-
-
-
-
- Use CompressionTest to verify snappy support is enabled and the libs can be loaded ON ALL NODES of your cluster:
- $ hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://host/path/to/hbase snappy
-
-
-
-
- Create a column family with snappy compression and verify it in the hbase shell:
- $ hbase> create 't1', { NAME => 'cf1', COMPRESSION => 'SNAPPY' }
-hbase> describe 't1'
- In the output of the "describe" command, you need to ensure it lists "COMPRESSION => 'SNAPPY'"
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+FAQGeneral
@@ -2223,21 +2115,131 @@
- Building HBase
+ HBase in Action
-
-When I build, why do I always get Unable to find resource 'VM_global_library.vm'?
-
+ Where can I find interesting videos and presentations on HBase?
- Ignore it. Its not an error. It is officially ugly though.
+ See
-
+
+
+
+ Compression In HBaseCompression
+
+
+ CompressionTest Tool
+
+ HBase includes a tool to test that compression is set up properly.
+ To run it, type ./bin/hbase org.apache.hadoop.hbase.util.CompressionTest.
+ This will emit usage information on how to run the tool.
+
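+ As a hedged sketch of the invocations described above (the namenode host and HDFS path are placeholders, not taken from the text):
+
+```
+# Running the tool with no arguments emits its usage.
+./bin/hbase org.apache.hadoop.hbase.util.CompressionTest
+# Test that the gz codec can be loaded and used under an example HDFS path.
+./bin/hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://namenode:8020/hbase gz
+```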
+
+
+
+
+
+ hbase.regionserver.codecs
+
+
+
+ To have a RegionServer test a set of codecs and fail to start if any
+ codec is missing or misinstalled, add the configuration
+
+ hbase.regionserver.codecs
+
+ to your hbase-site.xml with a comma-separated list of
+ codecs to test on startup. For example, if the
+
+ hbase.regionserver.codecs
+ value is lzo,gz and lzo is not present
+ or is improperly installed, the misconfigured RegionServer will fail
+ to start.
+
+
+ Administrators might make use of this facility to guard against
+ the case where a new server is added to the cluster before a
+ codec the cluster requires has been installed on it.
+
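+ As a minimal sketch of the hbase-site.xml entry described above (the property name and the lzo,gz example value come from the text):
+
+```xml
+<!-- Fail RegionServer startup if the lzo or gz codec is missing or broken. -->
+<property>
+  <name>hbase.regionserver.codecs</name>
+  <value>lzo,gz</value>
+</property>
+```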
+
+
+
+
+ LZO
+
+ Unfortunately, HBase cannot ship with LZO because of
+ licensing issues; HBase is Apache-licensed, LZO is GPL.
+ Therefore, LZO must be installed after HBase is installed.
+ See the Using LZO Compression
+ wiki page for how to make LZO work with HBase.
+
+ A common problem users run into when using LZO is that while the initial
+ cluster setup goes smoothly, a month later a sysadmin adds a machine to
+ the cluster, having forgotten to do the LZO
+ fixup on the new machine. Since HBase 0.90.0, the server should
+ fail in a way that makes the problem plain, but this is not guaranteed.
+ See
+ for a feature to help protect against a failed LZO install.
+
+
+
+
+ GZIP
+
+
+ GZIP will generally compress better than LZO, though it is slower.
+ For some setups, better compression may be preferred.
+ Java's built-in GZIP will be used unless the native Hadoop libs are
+ available on the CLASSPATH, in which case the native
+ compressors will be used instead (if the native libs are NOT present,
+ you will see lots of Got brand-new compressor
+ messages in your logs; see ).
+
+
+
+
+ SNAPPY
+
+
+ If Snappy is installed, HBase can make use of it (courtesy of
+ hadoop-snappy;
+ see Alejandro's note on the mailing list about the difference between
+ Snappy in Hadoop and Snappy in HBase).
+
+
+
+
+ Build and install snappy on all nodes
+ of your cluster.
+
+
+
+
+ Use CompressionTest to verify that snappy support is enabled and that the libs can be loaded ON ALL NODES of your cluster:
+ $ hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://host/path/to/hbase snappy
+
+
+
+
+ Create a column family with snappy compression and verify it in the hbase shell:
+ $ hbase> create 't1', { NAME => 'cf1', COMPRESSION => 'SNAPPY' }
+hbase> describe 't1'
+ In the output of the "describe" command, ensure that it lists "COMPRESSION => 'SNAPPY'".
+
+
+
+
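+ For an existing table, compression can also be switched on per column family from the hbase shell; in HBase of this era the table must be disabled before it can be altered. A hedged sketch (the table and family names are examples, matching those used above):
+
+```
+hbase> disable 't1'
+hbase> alter 't1', { NAME => 'cf1', COMPRESSION => 'SNAPPY' }
+hbase> enable 't1'
+```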
+
+
+
+
+
+
YCSB: The Yahoo! Cloud Serving Benchmark and HBaseTODO: Describe how YCSB is poor for putting up a decent cluster load.
@@ -2246,7 +2248,6 @@
-
HFile format version 2
@@ -2710,9 +2711,34 @@
+
+ Other Information about HBase
+ HBase Videos
+ Introduction to HBase by Todd Lipcon.
+
+ Building Real Time Services at Facebook with HBase by Jonathan Gray.
+
+
+ Sites
+ Cloudera's HBase Blog has a lot of links to useful HBase information.
+
+ CAP Confusion is a relevant presentation for background information on
+ distributed storage systems.
+
+
+
+ HBase Wiki has a page with a number of presentations.
+
+
+ Books
+ HBase: The Definitive Guide by Lars George.
+
+
+
+
HBase and the Apache Software FoundationHBase is a project in the Apache Software Foundation and as such there are responsibilities to the ASF to ensure
- a healthy project.
+ a healthy project.ASF Development ProcessSee the Apache Development Process page
for all sorts of information on how the ASF is structured (e.g., PMC, committers, contributors), to tips on contributing
@@ -2724,7 +2750,6 @@
lead and the committers. See ASF board reporting for more information.
-
Index: src/docbkx/developer.xml
===================================================================
--- src/docbkx/developer.xml (revision 1209994)
+++ src/docbkx/developer.xml (working copy)
@@ -160,6 +160,11 @@
[INFO] -----------------------------------------------------------------------
+ Build Gotchas
+ If you see Unable to find resource 'VM_global_library.vm', ignore it.
+ It's not an error, though it is officially ugly.
+
+
Index: src/docbkx/troubleshooting.xml
===================================================================
--- src/docbkx/troubleshooting.xml (revision 1209994)
+++ src/docbkx/troubleshooting.xml (working copy)
@@ -211,6 +211,10 @@
search-hadoop.com indexes all the mailing lists and is great for historical searches.
+
+ IRC
+ #hbase on irc.freenode.net
+ JIRA