HBase / HBASE-18164

Much faster locality cost function and candidate generator


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.4.0, 2.0.0-alpha-2, 2.0.0
    • Component/s: Balancer
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      New locality cost function and candidate generator that use caching and incremental computation to allow the stochastic load balancer to consider ~20x more cluster configurations for big clusters.
    • Flags:
      Patch

      Description

      We noticed that the stochastic load balancer was not scaling well with cluster size. That is to say, on our smaller clusters (~17 tables, ~12 region servers, ~5k regions) the balancer considers ~100,000 cluster configurations per 60s balancer run, but only ~5,000 per 60s on our bigger clusters (~82 tables, ~160 region servers, ~13k regions).

      Because of this, our bigger clusters are not able to converge on balance as quickly for things like table skew, region load, etc., because the balancer does not have enough time to "think".

      We have re-written the locality cost function to be incremental, meaning it only recomputes cost based on the most recent region move proposed by the balancer, rather than recomputing the cost across all regions/servers every iteration.
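The incremental recomputation can be sketched as follows. This is an illustrative simplification with hypothetical names (`IncrementalLocalityCost`, `regionMoved`, etc.), not the committed HBase code: the idea is to keep a running total of locality over current placements and adjust it in O(1) when a single region moves, instead of re-summing over all regions every iteration.

```java
// Minimal sketch of an incremental locality cost (hypothetical names):
// only the moved region's contribution to the total is updated.
public class IncrementalLocalityCost {
    private final double[][] localityOfRegionOnServer; // [region][server], each in [0,1]
    private double sumLocality; // running sum over current placements

    public IncrementalLocalityCost(double[][] locality, int[] regionToServer) {
        this.localityOfRegionOnServer = locality;
        for (int r = 0; r < regionToServer.length; r++) {
            sumLocality += locality[r][regionToServer[r]];
        }
    }

    /** O(1) update when the balancer proposes moving one region. */
    public void regionMoved(int region, int oldServer, int newServer) {
        sumLocality -= localityOfRegionOnServer[region][oldServer];
        sumLocality += localityOfRegionOnServer[region][newServer];
    }

    /** Cost is 1 minus average locality, so lower is better. */
    public double cost(int numRegions) {
        return 1.0 - sumLocality / numRegions;
    }
}
```

With ~13k regions this turns each cost evaluation from a full O(regions) scan into a constant-time adjustment, which is where most of the claimed speedup comes from.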

      Further, we also cache the locality of every region on every server at the beginning of the balancer's execution for both the LocalityBasedCostFunction and the LocalityCandidateGenerator to reference. This way, they need not collect all HDFS blocks of every region at each iteration of the balancer.
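The cache itself is conceptually just a region-by-server matrix of locality fractions, computed once up front. A minimal sketch, with hypothetical inputs (in HBase the per-server byte counts would come from HDFS block location reports):

```java
// Sketch of the one-time locality cache (hypothetical names/inputs):
// cache[r][s] = fraction of region r's data stored locally on server s.
// Built once per balancer run so the cost function and candidate
// generator never touch HDFS block locations in the inner loop.
public final class LocalityCache {
    /**
     * @param localBytes localBytes[r][s] = bytes of region r's blocks on server s
     * @param regionSizeBytes total size of each region's data
     */
    public static double[][] build(long[][] localBytes, long[] regionSizeBytes) {
        double[][] cache = new double[localBytes.length][];
        for (int r = 0; r < localBytes.length; r++) {
            cache[r] = new double[localBytes[r].length];
            for (int s = 0; s < localBytes[r].length; s++) {
                cache[r][s] = regionSizeBytes[r] == 0
                    ? 1.0 // empty region: treat as fully local
                    : (double) localBytes[r][s] / regionSizeBytes[r];
            }
        }
        return cache;
    }
}
```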

      The changes have been running in all 6 of our production clusters and all 4 QA clusters without issue. The speed improvements we noticed are massive. Our big clusters now consider 20x more cluster configurations.

      One design decision I made is to define locality cost as the difference between the best locality that is possible given the current cluster state and the currently measured locality. The old computation measured locality cost as the difference between the current locality and 100% locality; the new computation instead takes the difference between a region's current locality and the best locality any server in the cluster could give that region.
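The per-region difference between the two definitions can be shown concretely. This is an illustrative sketch (hypothetical class and method names, not the patch itself):

```java
// Illustration of the new cost definition (hypothetical names):
// a region's locality cost is its gap to the *best* locality any
// server could give it, rather than its gap to 100%.
public final class LocalityGapCost {
    public static double regionCost(double[] localityOnEachServer, int currentServer) {
        double best = 0.0;
        for (double l : localityOnEachServer) {
            best = Math.max(best, l);
        }
        return best - localityOnEachServer[currentServer];
    }
}
```

Under this definition, a region whose best attainable locality anywhere in the cluster is, say, 40% no longer contributes a large, unfixable cost the way it would when measured against 100%; the balancer only pays for gaps it can actually close by moving regions.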

        Attachments

        1. HBASE-18164-00.patch
          26 kB
          Kahlil Oppenheimer
        2. HBASE-18164-01.patch
          38 kB
          Kahlil Oppenheimer
        3. HBASE-18164-02.patch
          31 kB
          Kahlil Oppenheimer
        4. HBASE-18164-04.patch
          31 kB
          Kahlil Oppenheimer
        5. HBASE-18164-05.patch
          31 kB
          Kahlil Oppenheimer
        6. HBASE-18164-06.patch
          31 kB
          Kahlil Oppenheimer
        7. HBASE-18164-07.patch
          2 kB
          Kahlil Oppenheimer
        8. HBASE-18164-08.patch
          2 kB
          Kahlil Oppenheimer
        9. 18164.branch-1.addendum.txt
          1 kB
          Ted Yu


              People

              • Assignee:
                kahliloppenheimer Kahlil Oppenheimer
              • Reporter:
                kahliloppenheimer Kahlil Oppenheimer
              • Votes:
                0
              • Watchers:
                17
