HBase / HBASE-1017

Region balancing does not bring newly added node within acceptable range

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.19.0
    • Fix Version/s: 0.20.0
    • Component/s: None
    • Labels: None

      Description

      With a 10-node cluster, only 9 nodes were online. With about 215 total regions, each of the 9 had around 24 regions (average load 24). Slop is 10%, so 22 to 26 is the acceptable range.

      Starting up the 10th node, master log showed:

      2008-11-21 15:57:51,521 INFO org.apache.hadoop.hbase.master.ServerManager: Received start message from: 72.34.249.210:60020
      2008-11-21 15:57:53,351 DEBUG org.apache.hadoop.hbase.master.RegionManager: Server 72.34.249.219:60020 is overloaded. Server load: 25 avg: 22.0, slop: 0.1
      2008-11-21 15:57:53,351 DEBUG org.apache.hadoop.hbase.master.RegionManager: Choosing to reassign 3 regions. mostLoadedRegions has 10 regions in it.
      2008-11-21 15:57:53,351 DEBUG org.apache.hadoop.hbase.master.RegionManager: Going to close region streamitems,^@^@^@^@^AH�;,1225411051632
      2008-11-21 15:57:53,351 DEBUG org.apache.hadoop.hbase.master.RegionManager: Going to close region streamitems,^@^@^@^@^@�Ý,1225411056686
      2008-11-21 15:57:53,351 DEBUG org.apache.hadoop.hbase.master.RegionManager: Going to close region groups,,1222913580957
      2008-11-21 15:57:53,975 DEBUG org.apache.hadoop.hbase.master.RegionManager: Server 72.34.249.213:60020 is overloaded. Server load: 25 avg: 22.0, slop: 0.1
      2008-11-21 15:57:53,975 DEBUG org.apache.hadoop.hbase.master.RegionManager: Choosing to reassign 3 regions. mostLoadedRegions has 10 regions in it.
      2008-11-21 15:57:53,976 DEBUG org.apache.hadoop.hbase.master.RegionManager: Going to close region upgrade,,1226892014784
      2008-11-21 15:57:53,976 DEBUG org.apache.hadoop.hbase.master.RegionManager: Going to close region streamitems,^@^@^@^@^@3^Z�,1225411056701
      2008-11-21 15:57:53,976 DEBUG org.apache.hadoop.hbase.master.RegionManager: Going to close region streamitems,^@^@^@^@^@         ^L,1225411049042
      

      The new regionserver received only 6 regions. This happened because, when the 10th node came in, the average load dropped to 22. That caused two servers with 25 regions (acceptable when the average was 24, but not now) to reassign 3 of their regions each to bring them back down to the average. Unfortunately, all other servers remained within the 10% slop (20 to 24), so they were not overloaded and thus did not shed any regions. It was only chance that even 6 regions got reassigned; there could have been exactly 24 on each server, in which case none would have been assigned to the new node.

      This will behave worse on larger clusters, where adding a new node has little impact on the average load per server.
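      To make the failure mode concrete, here is a minimal sketch (illustrative Java, not the actual RegionManager code; the variable names are assumptions) of the slop-only overload check described above:

      // Illustrative sketch of the slop-only check: a server sheds regions only
      // when its load exceeds average * (1 + slop), so an empty new node attracts
      // little or nothing unless some other server happens to be over the bound.
      public class SlopCheckSketch {
        public static void main(String[] args) {
          double avgLoad = 22.0;  // average reported once the 10th node joins
          double slop = 0.1;      // 10%
          for (int serverLoad : new int[] {25, 24, 0}) {
            boolean overloaded = serverLoad > avgLoad * (1 + slop);
            System.out.println("load=" + serverLoad + " overloaded=" + overloaded);
          }
          // load=25 is overloaded (matches the log above); load=24 is not,
          // so servers holding 24 regions shed nothing and the new node stays empty.
        }
      }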

      1. HBASE-1017_v1.patch
        4 kB
        Evgeny Ryabitskiy
      2. HBASE-1017_v2.patch
        26 kB
        Evgeny Ryabitskiy
      3. HBASE-1017_v4.patch
        34 kB
        Evgeny Ryabitskiy
      4. HBASE-1017_v5.patch
        37 kB
        Evgeny Ryabitskiy
      5. HBASE-1017_v6.patch
        35 kB
        Evgeny Ryabitskiy
      6. HBASE-1017_v7.patch
        35 kB
        Evgeny Ryabitskiy
      7. HBASE-1017_v8.patch
        35 kB
        Evgeny Ryabitskiy
      8. HBASE-1017_v9.patch
        37 kB
        Evgeny Ryabitskiy
      9. HBASE-1017_v10.patch
        37 kB
        Evgeny Ryabitskiy
      10. loadbalance2.0.patch
        23 kB
        Evgeny Ryabitskiy
      11. HBASE-1017_v11_FINAL.patch
        10 kB
        Evgeny Ryabitskiy
      12. HBASE-1017_v12_FINAL.patch
        12 kB
        Evgeny Ryabitskiy


          Activity

          stack added a comment -

          Tested the latest version of the patch. Had to do it on a quiesced cluster because under load the region count is all over the place. Also, when killing servers, I didn't kill the regionserver hosting meta, because that makes a mess of the counts too.

          But after killing a non-catalog-hosting regionserver, balance came back promptly. Adding in a new node afterwards, balance again came back quickly. Did this a few times. There were enough regions that I should have hit Jon's original issue if it had not been fixed.

          Thanks for the patch Evgeny.

          Evgeny Ryabitskiy added a comment -

          Thanks for reviewing my patch!

          • patch regenerated, ^Ms removed
          • yes, getLoadToServers changed from public to default visibility
          • added more detailed documentation for the load balancer

          Yes, my fault... sorry.
          While removing the ServerManager refactor I forgot about several necessary changes there (that is why I started that refactor).

          ServerManager changes:

          • getAverageLoad returns an accurate average load (without rounding, as it did before)
          • cleaned up garbage in the loadToServers mapping (if there are no servers with a given load, there is no record with that load key); before, there was a record for every old load with an empty (size == 0) servers value

          The first is needed for more accurate balancing; the second is because the new LoadBalancer relies on the loadToServers mapping, and garbage from old loads makes this logic take wrong decisions.
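          A minimal sketch of what the second change means in practice (the map type and method name here are assumptions for illustration, not the patch's exact code):

          import java.util.HashSet;
          import java.util.Set;
          import java.util.SortedMap;
          import java.util.TreeMap;

          // Keep a load -> servers map free of empty entries so a balancer that
          // walks the map never sees stale load keys.
          public class LoadToServersSketch {
            private final SortedMap<Integer, Set<String>> loadToServers = new TreeMap<>();

            void updateLoad(String server, int oldLoad, int newLoad) {
              Set<String> old = loadToServers.get(oldLoad);
              if (old != null) {
                old.remove(server);
                if (old.isEmpty()) {
                  loadToServers.remove(oldLoad);  // the cleanup: drop the empty record
                }
              }
              loadToServers.computeIfAbsent(newLoad, k -> new HashSet<>()).add(server);
            }
          }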

          Now it should work in HBASE-1017_v12_FINAL.patch.

          stack added a comment -

          I took a look at this patch:

          + Remove the ^Ms.
          + getLoadToServers in ServerManager doesn't need to be public, right?
          + Test looks good and I like making a class to encapsulate load balancing logic. I'd suggest adding javadoc to the load balancer explaining how it works.

          I tried the code. I loaded up a bunch of regions, then shut the cluster down. Restarted. All came up balanced after a little while. I then tried adding a server to the cluster, which seems to be what Jon was doing above, but it never got any regions:

          aa0-000-12.u.powerset.com:60031 1242680796620 requests=0, regions=0, usedHeap=27, maxHeap=1244
          aa0-000-13.u.powerset.com:60031 1242680136542 requests=0, regions=21, usedHeap=158, maxHeap=1244
          aa0-000-14.u.powerset.com:60031 1242680136673 requests=0, regions=20, usedHeap=71, maxHeap=1244
          aa0-000-15.u.powerset.com:60031 1242680136162 requests=0, regions=19, usedHeap=106, maxHeap=1244

          It stayed at zero. Wasn't this patch supposed to address that?

          Evgeny Ryabitskiy added a comment -

          Without the refactor to ServerManager.
          Final version.
          It may seem like not such a small change... but I have no idea how to make it smaller.

          Evgeny Ryabitskiy added a comment -

          loadbalance2.0.patch is for my mega cool low-centralised load balance algorithm...
          but it is only a prototype so far, just to show my new ideas,
          and it is independent from the other patches here.

          What the idea was (see the sketch after this list):

          • Region Servers know better which regions to unassign... and can make their own decisions about it.
          • For such decisions the HRS will use a LoadBalancer thread.
          • To make such decisions the HRS needs to know the current load situation in the cluster (LoadMetrics).
          • The HRS reads the LoadMetrics record from ZK.
          • If the HRS can't get the LoadMetrics record, it skips that load-balance round.
          • If the HRS finds out that it is overloaded, it closes some Regions.
          • The Master can update and put a new LoadMetrics record in ZK with some frequency.
          • The LoadMetrics record contains: avgLoad, maxLoad, upLoadBound, lowLoadBound, underloadingFactor.
          • LoadMetrics is a class with those attributes and can be serialised to bytes and read back from bytes.
          • The LoadMetrics record is the data of a special ephemeral znode in ZK, created by the Master.
          • The Master still assigns closed regions to HRSs, so balancing is half-centralised (unassigning is distributed and assigning is centralised).
          • In the future the Master will use a flag in LoadMetrics to stop unassigning if there are too many closed Regions.
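          A hedged sketch of the idea (the field names come from the list above; the serialization format and the isOverloaded check are assumptions, since loadbalance2.0.patch is only a prototype):

          import java.io.*;

          // A LoadMetrics record the master could publish in a ZK znode and a
          // regionserver could read back to decide whether it is overloaded.
          public class LoadMetrics {
            double avgLoad;
            int maxLoad;
            int upLoadBound;
            int lowLoadBound;
            double underloadingFactor;

            byte[] toBytes() throws IOException {
              ByteArrayOutputStream bos = new ByteArrayOutputStream();
              DataOutputStream out = new DataOutputStream(bos);
              out.writeDouble(avgLoad);
              out.writeInt(maxLoad);
              out.writeInt(upLoadBound);
              out.writeInt(lowLoadBound);
              out.writeDouble(underloadingFactor);
              return bos.toByteArray();
            }

            static LoadMetrics fromBytes(byte[] data) throws IOException {
              DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
              LoadMetrics m = new LoadMetrics();
              m.avgLoad = in.readDouble();
              m.maxLoad = in.readInt();
              m.upLoadBound = in.readInt();
              m.lowLoadBound = in.readInt();
              m.underloadingFactor = in.readDouble();
              return m;
            }

            // An HRS holding this many regions would start closing some.
            boolean isOverloaded(int myRegionCount) {
              return myRegionCount > upLoadBound;
            }
          }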
          Evgeny Ryabitskiy added a comment -

          The patch is ready for review.
          All JUnit tests pass.

          It needs testing on a real cluster. Can anyone help me with that?

          Evgeny Ryabitskiy added a comment -

          About refactoring

          ServerManager has these mappings:

          • serverName -> serverInfo
          • serverAddr -> serverInfo
          • serverName -> load
          • load -> serverName

          1) serverName -> load is not necessary if you have serverName -> serverInfo
          2) All mappings are encapsulated in a ServersInfo class (an inner class of ServerManager)
          3) ServersInfo has operations for adding, updating and removing HRS information (an illustrative sketch follows)
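          Purely as an illustration of points 1)-3) (the class and field names here are hypothetical, not the patch's), the encapsulation could look roughly like this:

          import java.util.HashMap;
          import java.util.Map;

          // One object owns the related maps and exposes add/remove operations, and
          // serverName -> load is derived from serverName -> serverInfo rather than
          // being kept as a separate map.
          class ServersInfoSketch {
            static class ServerInfoStub {   // stand-in for the real server info class
              final String address;
              int load;
              ServerInfoStub(String address, int load) { this.address = address; this.load = load; }
            }

            private final Map<String, ServerInfoStub> nameToInfo = new HashMap<>();
            private final Map<String, ServerInfoStub> addrToInfo = new HashMap<>();

            void add(String name, ServerInfoStub info) {
              nameToInfo.put(name, info);
              addrToInfo.put(info.address, info);
            }

            void remove(String name) {
              ServerInfoStub info = nameToInfo.remove(name);
              if (info != null) addrToInfo.remove(info.address);
            }

            int loadOf(String name) {       // no separate serverName -> load map needed
              ServerInfoStub info = nameToInfo.get(name);
              return info == null ? 0 : info.load;
            }
          }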

          About Load Balance Algorithm

          Previous check: if the HRS load is more than avgLoad plus slop, the HRS is overloaded; close some regions (numToClose = currentRegions - avgLoad).

          Added check: if the HRS is the most loaded and the lowest-loaded HRSs are loaded less than avgLoadMinusSlop, then close some regions on the most loaded (numToClose = min(currentRegions - avgLoad, (avgLoadMinusSlop - lowestLoad) * numLowestLoadedHRS)).
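          A rough sketch of the two checks using the names from the formulas above (an illustration under those assumptions, not the patch itself):

          // Decide how many regions a server should close, given the master's view
          // of cluster load. Slop is applied multiplicatively, as in the log above.
          public class BalanceChecksSketch {
            static int regionsToClose(int currentRegions, double avgLoad, double slop,
                                      boolean mostLoaded, int lowestLoad, int numLowestLoadedHRS) {
              double avgLoadPlusSlop = avgLoad * (1 + slop);
              double avgLoadMinusSlop = avgLoad * (1 - slop);
              int avg = (int) Math.ceil(avgLoad);

              // Previous check: shed regions only when this server itself is overloaded.
              if (currentRegions > avgLoadPlusSlop) {
                return currentRegions - avg;
              }
              // Added check: even when not overloaded, the most-loaded server sheds
              // regions if the lightest servers sit below avgLoadMinusSlop.
              if (mostLoaded && lowestLoad < avgLoadMinusSlop) {
                int forUnderloaded = (int) ((avgLoadMinusSlop - lowestLoad) * numLowestLoadedHRS);
                return Math.max(0, Math.min(currentRegions - avg, forUnderloaded));
              }
              return 0;
            }
          }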

          Changes to the JUnit test for region rebalancing:

          Assert that the loads of all HRSs are within the slop range after rebalancing.

          The number of HRSs was upped from 4 to 10.

          Evgeny Ryabitskiy added a comment -

          Last tested version. Should do everything.

          Evgeny Ryabitskiy added a comment -

          Added new assertion for TestRegionRebalancing scenario.

          Evgeny Ryabitskiy added a comment -

          Extracted balancing into a LoadBalancer class + more refactoring + sync with SVN.

          Evgeny Ryabitskiy added a comment -

          Same algorithm + some code reorganisation + some refactoring to ServerManager

          Evgeny Ryabitskiy added a comment -

          First version of this logic. It's an outline that can be improved.

          Andrew Purtell added a comment -

          [21:07] <apurtell> i was going to work on the balancer a while back but got swamped. it will take some time to get right. actually i proposed load leveling which would not help in the scenario described if no other node considers itself overloaded.

          [21:09] <jgray2> apurtell: yes load is most important, and with a shift towards using memory more, that's going to matter as well

          [21:10] <apurtell> jgray2: rarely used regions should shed their indices and hstore fds anyway imho

          [21:10] <jgray2> i agree

          [21:11] <jgray2> and that all should be taken into account


            People

            • Assignee: Evgeny Ryabitskiy
            • Reporter: Jonathan Gray
            • Votes: 0
            • Watchers: 0
