Description
Currently, cost functions including RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, and all the load cost functions measure the unevenness of a distribution as the sum of each region server's deviation from the mean. This simple implementation works when the cluster is small, but as the cluster grows to more region servers and regions, it fails to capture hot spots or a small number of unbalanced servers. The proposal is to use the standard deviation of the count per region server instead, which exposes a small portion of region servers carrying an overwhelming share of the load or allocation.
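The difference between the two measures can be sketched as follows. This is illustrative code, not the actual HBase cost function implementation; the class and method names are hypothetical. It compares two clusters that have the same sum of absolute deviations, but where only the standard deviation distinguishes the hot-spot cluster from the mildly uneven one:

```java
import java.util.Arrays;

public class SkewMetrics {
    /** Sum of absolute deviations from the mean (the current style of measure). */
    static double sumOfDeviations(double[] counts) {
        double mean = Arrays.stream(counts).average().orElse(0);
        return Arrays.stream(counts).map(c -> Math.abs(c - mean)).sum();
    }

    /** Population standard deviation of per-server counts (the proposed measure). */
    static double stdDev(double[] counts) {
        double mean = Arrays.stream(counts).average().orElse(0);
        double var = Arrays.stream(counts)
                .map(c -> (c - mean) * (c - mean)).average().orElse(0);
        return Math.sqrt(var);
    }

    public static void main(String[] args) {
        // Mildly uneven cluster: 50 servers with 12 regions, 50 with 8 (mean 10).
        double[] spread = new double[100];
        for (int i = 0; i < 100; i++) spread[i] = i < 50 ? 12 : 8;

        // Hot-spot cluster with the same total (1000 regions) and the same sum
        // of deviations: one server with 110 regions, 99 with ~8.99 each.
        double[] hotspot = new double[100];
        hotspot[0] = 110;
        for (int i = 1; i < 100; i++) hotspot[i] = 890.0 / 99;

        // Both sums of deviations are 200, so a sum-based measure rates the two
        // clusters identically; the standard deviation (2.0 vs ~10.05) does not.
        System.out.printf("spread:  sumDev=%.2f stdDev=%.2f%n",
                sumOfDeviations(spread), stdDev(spread));
        System.out.printf("hotspot: sumDev=%.2f stdDev=%.2f%n",
                sumOfDeviations(hotspot), stdDev(hotspot));
    }
}
```

Because the standard deviation squares each server's deviation before averaging, a single heavily loaded server dominates the metric even when the total deviation is unchanged.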
TableSkewCostFunction uses the sum over all tables of the maximum per-server deviation as its measure of unevenness. This fails in a very common operational scenario. Say we have 100 regions on 50 nodes, two per node, and we add 50 new nodes that each hold 0. The maximum deviation from the mean is 1, compared to 99 in the worst case of all 100 regions on a single server. The normalized cost is 1/99 ≈ 0.01, below the default threshold of 0.05, so the balancer wouldn't move any regions. The proposal is to use the standard deviation of the count per region server to detect this scenario, generating a cost of about 0.1 in this case, well above the threshold.
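The scenario above can be checked numerically. This is a minimal sketch, not the actual TableSkewCostFunction code; following the description, both measures are normalized against the worst case of all 100 regions on a single server:

```java
public class TableSkewExample {
    /** Population standard deviation of per-server region counts. */
    static double stdDev(double[] xs) {
        double mean = java.util.Arrays.stream(xs).average().orElse(0);
        double var = java.util.Arrays.stream(xs)
                .map(x -> (x - mean) * (x - mean)).average().orElse(0);
        return Math.sqrt(var);
    }

    public static void main(String[] args) {
        int servers = 100, regions = 100;

        // 50 original servers with 2 regions each, 50 new servers with 0.
        double[] counts = new double[servers];
        for (int i = 0; i < 50; i++) counts[i] = 2;

        double mean = (double) regions / servers;        // 1.0

        // Current measure: max deviation from the mean, normalized by the
        // worst case. |2 - 1| = |0 - 1| = 1, worst case 100 - 1 = 99.
        double maxDevCost = 1.0 / (regions - mean);      // 1/99, about 0.01

        // Proposed measure: standard deviation, normalized the same way.
        double[] worst = new double[servers];
        worst[0] = regions;                              // 100 regions on one server
        double stdCost = stdDev(counts) / stdDev(worst); // 1/sqrt(99), about 0.1

        System.out.printf("max-deviation cost: %.4f%n", maxDevCost); // below 0.05
        System.out.printf("std-deviation cost: %.4f%n", stdCost);    // above 0.05
    }
}
```

With the max-deviation measure the cost stays under the 0.05 threshold and the balancer does nothing; with the standard deviation it clears the threshold and the new servers receive regions.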
A patch is being tested and will follow shortly.
Attachments
Issue Links
- is a child of HBASE-25697 StochasticBalancer improvement for large scale clusters (Open)
- is related to HBASE-26309 Balancer tends to move regions to the server at the end of list (Resolved)
- relates to HBASE-27302 Adding a trigger for Stochastica Balancer to safeguard for upper bound outliers (Open)
- links to
1. TableSkewCostFunction need to use aggregated deviation (Resolved, Clara Xiong)
2. Update default weight of cost functions (Open, Unassigned)
3. tableSkewCostFunction aggregate cost per table incorrectly (Resolved, Unassigned)