Affects Version/s: None
Release Note: Fixes performance regression in increment/append and checkAnd* operations in hbase-1.0.x and hbase-1.1.x. This fix is not needed in hbase-1.2 and up; those branches have HBASE-12751, which does effectively the same thing.
This is an attempt to fix the increment performance regression caused by HBASE-8763 on branch-1.0.
I'm aware that hbase.increment.fast.but.narrow.consistency was added to branch-1.0 (HBASE-15031) to address the issue, and that separate work is ongoing on the master branch, but in any case, this is my take on the problem.
- Server: 4-core Xeon 2.4GHz Linux server running mini cluster (100 handlers, JDK 1.7)
- Client: Another box of the same spec
- Increments on random 10k records on a single-region table, recreated every time
Increment throughput (TPS):
||Num threads||Before||
We can clearly observe that throughput degrades as the number of concurrent requests increases, which led me to believe there is severe context-switching overhead. I could indirectly confirm that suspicion with the cs column in vmstat output: branch-1.0 shows a much higher number of context switches despite much lower throughput.
Here are the observations:
- A WriteEntry in the writeQueue can only be removed by the very handler that put it, and only when it is at the front of the queue and marked complete.
- Since a WriteEntry is marked complete only after the wait-loop, only one entry can be removed at a time.
- This stringent condition causes O(N^2) context switches, where N is the number of concurrent handlers processing requests.
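To make the observations above concrete, here is a minimal single-file model of that completion scheme. The class and method names (WriteQueueBefore, begin, complete) are hypothetical simplifications, not the actual MultiVersionConsistencyControl code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simplified model of the branch-1.0 behavior: an entry is removable only by
// its owning handler, only at the head of the queue, and it is marked complete
// only after the wait-loop. notifyAll() wakes every waiting handler, but only
// the new head's owner can make progress, so N concurrent handlers cause on
// the order of N^2 wake-ups. Names are hypothetical, not the real HBase code.
class WriteQueueBefore {
    static final class WriteEntry {
        final long writeNumber;
        boolean complete;
        WriteEntry(long n) { writeNumber = n; }
    }

    private final Queue<WriteEntry> writeQueue = new ArrayDeque<>();
    private long nextWriteNumber = 1;

    synchronized WriteEntry begin() {
        WriteEntry e = new WriteEntry(nextWriteNumber++);
        writeQueue.add(e);
        return e;
    }

    synchronized void complete(WriteEntry e) throws InterruptedException {
        while (writeQueue.peek() != e) {  // spin until my own entry is the head
            wait();                        // each notifyAll() wakes N-1 losers
        }
        e.complete = true;                 // complete is set only after the wait-loop,
        writeQueue.remove();               // so exactly one entry leaves per round
        notifyAll();
    }

    synchronized int size() { return writeQueue.size(); }
}
```

In this model, a handler whose entry sits behind an incomplete predecessor contributes nothing when woken; it just re-checks the head and sleeps again, which is where the context-switch count blows up.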
So what I tried here is to mark the WriteEntry complete before entering the wait-loop. With this change, multiple WriteEntries can be shifted out at a time without context switches. I changed writeQueue to a LinkedHashSet, since a fast containment check is needed now that a WriteEntry can be removed by any handler.
The numbers look good; it's virtually identical to pre-
||Num threads||branch-1.0 with fix||
So what do you think about it? Please let me know if I'm missing anything.