HBASE-10501

Improve IncreasingToUpperBoundRegionSplitPolicy to avoid too many regions

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.96.2, 0.98.1, 0.99.0, 0.94.17
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      Changes the default split policy to avoid too many regions with default settings.
      The old policy calculates the split size at each RS as follows: MIN(maxFileSize, flushSize*NoRegions^2) (NoRegions is the number of regions for the table in question seen on this RS)

      The new policy calculates the size this way: MIN(maxFileSize, flushSize*2*NoRegions^3)
      Note that the initial split size is now 2 * the flushSize. With default settings it increases from 128mb to 256mb.

      The new policy still allows spreading out the regions over the cluster quickly, but then grows the desired size fairly quickly in order to avoid too many regions per RS.
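The two formulas in the release note can be sketched as a short Java snippet. This is illustrative only, not the actual HBase source; the class and method names are made up for the example:

```java
public class SplitSizeSketch {
    // Old policy: MIN(maxFileSize, flushSize * NoRegions^2)
    static long oldPolicySize(long maxFileSize, long flushSize, int regionCount) {
        return Math.min(maxFileSize, flushSize * regionCount * regionCount);
    }

    // New policy: MIN(maxFileSize, 2 * flushSize * NoRegions^3)
    static long newPolicySize(long maxFileSize, long flushSize, int regionCount) {
        long initialSize = 2 * flushSize;
        return Math.min(maxFileSize,
            initialSize * regionCount * regionCount * regionCount);
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long flush = 128 * mb;          // default flush size: 128mb
        long max = 10L * 1024 * mb;     // 10gb max file size
        // With one region, the split size doubles from 128mb to 256mb:
        System.out.println(oldPolicySize(max, flush, 1) / mb); // 128
        System.out.println(newPolicySize(max, flush, 1) / mb); // 256
    }
}
```

Note how with many regions both formulas are capped by maxFileSize, so only the ramp-up differs.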

      Description

      During some (admittedly artificial) load testing we found a large amount of split activity, which we tracked down to the IncreasingToUpperBoundRegionSplitPolicy.

      The current logic is this (from the comments):
      "regions that are on this server that all are of the same table, squared, times the region flush size OR the maximum region split size, whichever is smaller"

      So with a flush size of 128mb and a max file size of 20gb, we'd need 13 regions of the same table on an RS to reach the max size.
      With a 10gb max file size it is still 9 regions of the same table.
      Considering that the number of regions that an RS can carry is limited and there might be multiple tables, this should be more configurable.
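To make the arithmetic above concrete, here is a small sketch (a hypothetical helper, not HBase code) that counts how many same-table regions the old policy needs before the split size reaches the max file size:

```java
public class OldPolicyRegionCount {
    // Smallest region count r such that flushSize * r^2 >= maxFileSize,
    // i.e. the point where the old policy stops splitting early.
    static int regionsToReachMax(long maxFileSize, long flushSize) {
        int r = 1;
        while (flushSize * r * r < maxFileSize) {
            r++;
        }
        return r;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long flush = 128 * mb;
        System.out.println(regionsToReachMax(20L * 1024 * mb, flush)); // 13
        System.out.println(regionsToReachMax(10L * 1024 * mb, flush)); // 9
    }
}
```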

      I think the squaring is smart and we do not need to change it.

      We could

      • Make the start size configurable and default it to the flush size
      • Add a multiplier for the initial size, i.e. start with n * flushSize
      • Also change the default to start with 2 * flushSize

      Of course one can override the default split policy, but these seem like simple tweaks.

      Or we could instead set a goal for how many regions of the same table would need to be present in order to reach the max size. In that case we'd start with maxSize/goal^2. So if the max size is 20gb and the goal is three, we'd start with 20g/9 = 2.2g for the initial region size.
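The goal-based alternative can be sketched as follows (illustrative only; the helper name and `goal` parameter are hypothetical, not an existing HBase setting):

```java
public class GoalBasedStart {
    // Initial split size such that `goal` same-table regions reach
    // maxFileSize under squared growth: start = maxFileSize / goal^2.
    static long initialSize(long maxFileSize, int goal) {
        return maxFileSize / ((long) goal * goal);
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        // 20gb max size with a goal of three regions -> ~2.2gb initial size
        System.out.printf("%.1f gb%n", initialSize(20 * gb, 3) / (double) gb);
    }
}
```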

      stack, I'm especially interested in your opinion.

      1. 10501-0.94.txt
        2 kB
        Lars Hofhansl
      2. 10501-0.94-v2.txt
        3 kB
        Lars Hofhansl
      3. 10501-0.94-v3.txt
        3 kB
        Lars Hofhansl
      4. 10501-0.94-v4.txt
        3 kB
        Lars Hofhansl
      5. 10501-trunk.txt
        7 kB
        Lars Hofhansl
      6. 10501-trunk-v2.txt
        5 kB
        Lars Hofhansl
      7. 10501-0.94-v5.txt
        5 kB
        Lars Hofhansl


          People

          • Assignee: Lars Hofhansl
          • Reporter: Lars Hofhansl
          • Votes: 0
          • Watchers: 5
