HBASE-22618: Added the possibility to load custom cost functions

Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha-1, 2.2.0, 2.2.1, 2.1.6, 1.4.11, 2.1.7
    • Fix Version/s: 3.0.0-alpha-1, 1.5.0, 2.3.0
    • Component/s: None
    • Labels: None
    • Release Note: Extends `StochasticLoadBalancer` to support user-provided cost functions. These are loaded in addition to the default set of cost functions. Custom function implementations must extend `StochasticLoadBalancer$CostFunction`. Enable any additional functions by placing them on the master class path and configuring `hbase.master.balancer.stochastic.additionalCostFunctions` with a comma-separated list of fully-qualified class names.
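
      To illustrate, here is a minimal sketch of a custom cost function, assuming the contract `StochasticLoadBalancer$CostFunction` exposes after this change (a `Configuration` constructor, a settable multiplier, and a `cost()` method returning a value scaled between 0 and 1); the class name and the multiplier property key are hypothetical, and the exact signatures should be checked against the HBase version in use:

      ```java
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer;

      /**
       * Hypothetical custom cost function penalizing, e.g., capacity skew.
       * Check the exact CostFunction contract against your HBase version.
       */
      public class MyCapacityCostFunction extends StochasticLoadBalancer.CostFunction {

        public MyCapacityCostFunction(Configuration conf) {
          super(conf);
          // Weight of this function relative to the built-in ones; this key
          // name is illustrative, not a stock HBase property.
          setMultiplier(conf.getFloat("hbase.master.balancer.custom.capacityCost", 500f));
        }

        @Override
        protected double cost() {
          // Expected to return a value scaled between 0 (best) and 1 (worst),
          // which the balancer then weighs by the multiplier set above.
          return 0.0; // placeholder: compute, e.g., region-count skew here
        }
      }
      ```

      It would then be enabled by shipping the class on the master class path and setting `hbase.master.balancer.stochastic.additionalCostFunctions` to its fully-qualified name (e.g. `com.example.MyCapacityCostFunction`) in the master's hbase-site.xml.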

    Description

      Hi,

      We would like to open a discussion about supporting regions on heterogeneous deployments, i.e. an HBase cluster running on different kinds of hardware.

      Why?

      • Cloud deployments mean that we may not be able to have the same hardware throughout the years
      • Some tables may need special requirements such as SSDs, whereas others should be using hard drives

      Our use case

      We found out that in our use case (single table, dedicated HBase and Hadoop tuned for our workload, good key distribution), the number of regions per RS was the real limit for us.

      Over the years, due to historical reasons and also the need to benchmark new machines, we ended up with different groups of hardware: some servers can handle only 180 regions, whereas the biggest can handle more than 900. Because of such a difference, we had to disable load balancing to avoid the round-robin assignment, and we developed internal tooling responsible for balancing regions across RegionServers. That was 1.5 years ago.

      Our proof of concept

      We did work on a proof of concept here, and ran some early tests here, here, and here. We wrote the balancer for our use case, which means that:

      • there is one table
      • there are no region replicas
      • there is good key dispersion
      • there are no regions on master

      A rule file is loaded before balancing. It contains one rule per line; a rule is composed of a regexp matching hostnames and a limit. For example, we could have:

       

      rs[0-9] 200

      rs1[0-9] 50

       

      RegionServers whose hostname matches the first rule will have a limit of 200 regions, and the others a limit of 50. If there is no match, a default limit is applied.
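
      As a sketch of how such a rule file could be resolved against hostnames (illustrative code, not the PoC's actual implementation):

      ```java
      import java.util.ArrayList;
      import java.util.List;
      import java.util.regex.Pattern;

      /** Illustrative resolver for "hostname-regexp limit" rule lines. */
      public class RulesResolver {
        private static class Rule {
          final Pattern hostPattern;
          final int limit;
          Rule(Pattern hostPattern, int limit) {
            this.hostPattern = hostPattern;
            this.limit = limit;
          }
        }

        private final List<Rule> rules = new ArrayList<>();
        private final int defaultLimit;

        public RulesResolver(List<String> lines, int defaultLimit) {
          this.defaultLimit = defaultLimit;
          for (String line : lines) {
            String[] parts = line.trim().split("\\s+"); // "<regexp> <limit>"
            rules.add(new Rule(Pattern.compile(parts[0]), Integer.parseInt(parts[1])));
          }
        }

        /** Returns the limit of the first matching rule, or the default. */
        public int limitFor(String hostname) {
          for (Rule rule : rules) {
            if (rule.hostPattern.matcher(hostname).matches()) {
              return rule.limit;
            }
          }
          return defaultLimit;
        }
      }
      ```

      Under the example rules, `limitFor("rs3")` returns 200 and `limitFor("rs12")` returns 50 (the patterns are fully anchored, so `rs[0-9]` does not match `rs12`); any other hostname falls back to the default.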

      The rules give us two pieces of information: the maximum number of regions the cluster can hold, and the limit for each server. HeterogeneousBalancer will try to balance regions according to each server's capacity.

      Let's take an example with 20 RS:

      • 10 RS, named rs0 through rs9, loaded with 60 regions each; each can handle 200 regions.
      • 10 RS, named rs10 through rs19, loaded with 60 regions each; each can handle only 50 regions.

      Based on the following rules:

       

      rs[0-9] 200

      rs1[0-9] 50

       

      The second group is overloaded, whereas the first group has plenty of space.

      We know that we can handle at most 2500 regions (200*10 + 50*10) and we currently have 1200 regions (60*20). HeterogeneousBalancer will understand that the cluster is full at 48.0% (1200/2500). Based on this information, it will try to bring all the RegionServers to ~48% load according to the rules. In this case, it will move regions from the second group to the first.
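
      The arithmetic above, as a worked sketch (class and variable names are illustrative):

      ```java
      /** Worked version of the cluster-fill computation described above. */
      public class ClusterFillExample {
        public static void main(String[] args) {
          int capacity = 10 * 200 + 10 * 50;         // summed per-rule limits: 2500
          int current = 20 * 60;                     // 60 regions on 20 RS: 1200
          double fill = (double) current / capacity; // 1200 / 2500 = 0.48
          System.out.printf("cluster is full at %.1f%%%n", fill * 100); // 48.0%
        }
      }
      ```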

      The balancer will:

      • compute how many regions need to be moved; in our example, moving 36 regions off rs10 takes it from 120.0% to 48.0% load (see the sketch after this list)
      • select the regions with the lowest data locality
      • find an appropriate RS for each region, picking the least-loaded RS with capacity available
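
      A sketch of the per-server move computation under those rules (illustrative, not the PoC code):

      ```java
      /** How many regions an overloaded RS should shed to reach the cluster fill. */
      public class MovesExample {
        static int regionsToMove(int load, int limit, double targetFill) {
          int target = (int) Math.floor(limit * targetFill); // desired region count
          return Math.max(0, load - target);
        }

        public static void main(String[] args) {
          // rs10 holds 60 regions with a limit of 50 (120.0% load); at a 48%
          // cluster fill its target is 24 regions, so it must shed 36 regions,
          // landing at 24/50 = 48.0%.
          System.out.println(regionsToMove(60, 50, 0.48)); // prints 36
        }
      }
      ```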

      Other implementations and ideas

      Clay Baenziger proposed this idea on the dev ML:

      Could it work to have the stochastic load balancer use pluggable cost functions instead of this static list of cost functions? Then, could this type of a load balancer be implemented simply as a new cost function which folks could choose to load and mix with the others?

      I think this could be an interesting way to include user functions in the mix. As you know your hardware and access patterns, you can easily tell which metrics are important for balancing; for us it would only be the number of regions, but we could mix it with the incoming writes!

       

      bhupendra.jain also proposed the idea of "labels":

       

      Internally, we are also having discussions about developing a similar solution. In our approach, we were also thinking of adding an "RS label" feature, similar to the Hadoop node label feature.

      Each RS can have a label to denote its capabilities/resources. When a user creates a table, there can be extra attributes in its descriptor. The balancer can then decide where to host the table's regions based on the RS labels and these attributes.
      With the RS label feature, the balancer can be more intelligent. For example, tables with a high read load need more cache backed by SSDs, so such tables' regions should be hosted on RSes having SSDs...

      I love the idea, but I think Clay's approach allows for a better and faster first set of commits on the subject! What do you think?

      Attachments

        1. HBASE-22618.master.001.patch
          14 kB
          Wellington Chevreuil
        2. HBASE-22618.branch-1.001.patch
          13 kB
          Pierre Zemb
        3. HBASE-22618.branch-2.001.patch
          13 kB
          Pierre Zemb
        4. HBASE-22618.branch-1.002.patch
          13 kB
          Wellington Chevreuil

          People

            Assignee: Pierre Zemb (PierreZ)
            Reporter: Pierre Zemb (PierreZ)
            Votes: 0
            Watchers: 17
