after weeks of testing hbase 0.1.3 and hadoop (0.16.4, 0.17.1), i found there is a lot of work to do before a single regionserver can handle around 100G of data, or even more. i'd like to share my opinions here with stack and the other developers.
first, the easiest way to improve regionserver scalability is to upgrade the hardware: use a 64-bit os and 8G of memory for the regionserver process, and speed up disk io.
besides hardware, the following are the software bottlenecks i found in the regionserver:
1. as data grows, compaction eats more and more cpu (and io) time. total compaction time is basically linear in the whole data size, and sometimes even quadratic in it.
2. memory usage depends on the number of open mapfiles.
3. the number of network connections also depends on the number of open mapfiles, see
HADOOP-2341 and HBASE-24.
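to see why point 1 can go quadratic, here is a toy model (my own sketch, not hbase code): every memcache flush writes one store file, and once the file count hits a threshold a compaction merges them all into one, rewriting every byte. the FLUSH_MB and THRESHOLD numbers are assumptions for illustration only.

```python
# toy model of compaction write amplification in one region/store.
# assumptions (not taken from hbase source): 64M flushes, merge-all
# compaction whenever 3 or more store files exist.
FLUSH_MB = 64     # assumed memcache flush size
THRESHOLD = 3     # assumed compaction threshold

def compaction_io(total_mb):
    """total MB rewritten by compactions while ingesting total_mb of data."""
    files, rewritten = [], 0
    for _ in range(total_mb // FLUSH_MB):
        files.append(FLUSH_MB)          # one flush -> one new store file
        if len(files) >= THRESHOLD:
            merged = sum(files)
            rewritten += merged         # the merge rewrites every byte once
            files = [merged]
    return rewritten

for gb in (10, 50, 100):
    print(gb, "GB ingested ->", compaction_io(gb * 1024) // 1024,
          "GB rewritten by compactions")
```

under this model the rewritten volume grows roughly with the square of the ingested data, because each compaction rewrites everything accumulated so far; real hbase splits regions, which spreads the cost but does not remove it.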
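for points 2 and 3, a rough back-of-envelope estimate shows how fast the open-mapfile count blows up at 100G. all the per-file numbers below are assumptions i picked for illustration, not measured values:

```python
# rough estimate of open mapfiles on a 100G regionserver.
# all constants are assumed for illustration, not from hbase source.
DATA_GB = 100
REGION_MB = 256            # assumed max region size before split
FAMILIES = 1               # assumed column families
FILES_PER_STORE = 3        # assumed store files kept per store
INDEX_KB_PER_FILE = 500    # assumed in-memory mapfile index size
CONNS_PER_FILE = 1         # each open mapfile holds a dfs connection (HADOOP-2341)

regions = DATA_GB * 1024 // REGION_MB
open_files = regions * FAMILIES * FILES_PER_STORE

print("regions:", regions)
print("open mapfiles:", open_files)
print("index memory (MB):", open_files * INDEX_KB_PER_FILE // 1024)
print("datanode connections:", open_files * CONNS_PER_FILE)
```

with these assumed numbers a single regionserver ends up with over a thousand open mapfiles, and both the index memory and the datanode connection count scale linearly with that, which is exactly the pattern HADOOP-2341 and HBASE-24 describe.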