diff --git a/src/main/javadoc/overview.html b/src/main/javadoc/overview.html
index 968ec63..f2a3a17 100644
--- a/src/main/javadoc/overview.html
+++ b/src/main/javadoc/overview.html
@@ -54,7 +54,13 @@
- Java 1.6.x, preferably from Sun. Use the latest version available except u18 (u19 is fine).
- - This version of HBase will only run on Hadoop 0.20.x.
+ - This version of HBase will only run on Hadoop 0.20.x.
+ HBase will lose data unless it is running on an HDFS that has a durable sync operation.
+ Currently only the 0.20-append branch has this attribute.
+ No official releases have been made from this branch as of this writing,
+ so you will have to build your own Hadoop from the tip of this branch
+ (or install Cloudera's CDH3b2 when it is available; it will have a durable sync).
+ A rough checkout-and-build sketch follows this list.
-
ssh must be installed and sshd must be running to use Hadoop's scripts to manage remote Hadoop daemons.
You must be able to ssh to all nodes, including your local node, using passwordless login
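To make the prerequisites above concrete (building Hadoop from the tip of the 0.20-append branch, and passwordless ssh for the Hadoop scripts), here is a rough sketch. The SVN URL, checkout directory, and ant target are assumptions about the Hadoop 0.20 layout of the time, not taken from this document; adjust them to your environment.

# Build Hadoop from the tip of the append branch (URL and build target are assumptions).
$ svn checkout http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append hadoop-append
$ cd hadoop-append
$ ant jar    # deploy the resulting build to every node (and under HBase's lib/) so the whole cluster runs the same Hadoop

# Passwordless ssh for the user that runs the Hadoop scripts.
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost    # should log in without a password prompt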
@@ -77,22 +83,7 @@
on your cluster, or an equivalent.
-
- This is the current list of patches we recommend you apply to your running Hadoop cluster:
-
-
- -
- HBase is a database, it uses a lot of files at the same time. The default ulimit -n of 1024 on *nix systems is insufficient.
+ The default ulimit -n of 1024 on *nix systems will be insufficient.
Any significant amount of loading will lead you to
FAQ: Why do I see "java.io.IOException...(Too many open files)" in my logs?.
You will also notice errors like:
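Following on the ulimit warning above, here is one common way to raise the open-file limit on a Linux node. The user name hadoop and the 32768 value are illustrative choices, not taken from this document.

# Raise the open-file limit for the user that runs the HDFS/HBase daemons.
# "hadoop" and 32768 are example values; pick whatever fits your cluster.
$ sudo sh -c 'cat >> /etc/security/limits.conf' <<'EOF'
hadoop  soft  nofile  32768
hadoop  hard  nofile  32768
EOF

# On Debian/Ubuntu, make sure PAM applies limits.conf to login sessions
# (skip if the line is already present):
$ echo 'session required pam_limits.so' | sudo tee -a /etc/pam.d/common-session

# Verify from a fresh login as that user:
$ ulimit -n
32768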
@@ -100,8 +91,9 @@
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
- Do yourself a favor and change this to more than 10k using the FAQ.
- Also, HDFS has an upper bound of files that it can serve at the same time, called xcievers (yes, this is misspelled). Again, before doing any loading,
+ Do yourself a favor and change this to more than 10k. See the FAQ on the HBase wiki for how.
+ Also, HDFS has an upper bound on the number of files that it can serve at the same time,
+ called xcievers (yes, this is misspelled). Again, before doing any loading,
make sure you configured Hadoop's conf/hdfs-site.xml with this:
<property>