
[HDFS-12601] Implement new hdfs balancer threshold units


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.6.5, 2.7.4, 3.0.0-alpha3
    • Fix Version/s: None
    • Component/s: balancer & mover
    • Labels: None

    Description

      The balancer threshold unit is inappropriate for most new clusters, which have a lot of capacity but a small used percentage.

      For example, in one of our new clusters, HDFS capacity is 2.2 PB but only 160 TB is used (across all DNs). Since the cluster has 40 nodes, a threshold of 1 for the `hdfs balancer -threshold` parameter corresponds to 0.55 TB per DN.
      Some DNs currently hold as little as 3.5 TB, while others hold as much as 4.6 TB.

      So the actual imbalance is more like 24% (3.5/4.6 ≈ 76%, i.e. a 24% spread in used space).
      Yet `hdfs balancer -threshold 1` says there is nothing to balance (and a value smaller than 1 cannot be passed).
      The balancer believes the imbalance is under 1% because it measures against full capacity,
      when relative to used space it is actually 24%.
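
      For concreteness, here is a minimal sketch (plain Java, not Balancer code; it assumes every DN has the same capacity) contrasting the two ways of measuring the spread for the numbers above:

```java
// Contrast the capacity-based view (current Balancer semantics) with the
// used-space view proposed below, using the example cluster's numbers.
// Plain Java illustration only; assumes uniform DN capacity.
public class ThresholdExample {
  public static void main(String[] args) {
    double dnCapacityTb = 2200.0 / 40;  // 2.2 PB over 40 DNs = 55 TB per DN
    double avgUsedTb = 160.0 / 40;      // 160 TB used cluster-wide = 4 TB per DN
    double minUsedTb = 3.5, maxUsedTb = 4.6;

    // Capacity-based view: each DN's deviation from the mean utilization,
    // expressed as a percentage of DN capacity -- roughly 1% either way.
    double over  = (maxUsedTb - avgUsedTb) / dnCapacityTb * 100;  // ~1.1%
    double under = (avgUsedTb - minUsedTb) / dnCapacityTb * 100;  // ~0.9%
    System.out.printf("capacity-based deviation: +%.1f%% / -%.1f%%%n", over, under);

    // Used-space view: relative spread between the least and most loaded DN.
    double spread = (1 - minUsedTb / maxUsedTb) * 100;            // ~23.9%
    System.out.printf("used-space spread: %.1f%%%n", spread);
  }
}
```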

      We also see that the nodes holding more data receive more processing tasks
      (because of data locality).

      It would be great to introduce a suffix for the balancer's `-threshold` parameter:

      • 10c ('c' for capacity) would mean 10% of the DN's capacity (the current behavior; 'c' would be the default when no suffix is given, so the change is backward compatible);
      • 10u ('u' for used space variance across all DNs) would be measured as the spread 1 − min_used/max_used. For the example above (1 − 3.5/4.6 ≈ 24%), the cluster would get rebalanced correctly. A parsing sketch follows this list.
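
      A rough sketch of how such a suffix could be parsed is below; the class, enum, and method names are hypothetical and do not exist in the Balancer today:

```java
// Hypothetical parser for the proposed threshold suffix. Names here are
// illustrative only, not existing Balancer code.
public class ThresholdArg {
  enum Unit { CAPACITY, USED }

  final double percent;
  final Unit unit;

  ThresholdArg(double percent, Unit unit) {
    this.percent = percent;
    this.unit = unit;
  }

  // "10" and "10c" -> 10% of DN capacity (backward-compatible default);
  // "10u"          -> 10% spread of used space across DNs.
  static ThresholdArg parse(String arg) {
    Unit unit = Unit.CAPACITY;
    String number = arg;
    char last = arg.charAt(arg.length() - 1);
    if (last == 'c' || last == 'u') {
      unit = (last == 'u') ? Unit.USED : Unit.CAPACITY;
      number = arg.substring(0, arg.length() - 1);
    }
    return new ThresholdArg(Double.parseDouble(number), unit);
  }

  public static void main(String[] args) {
    for (String s : new String[] {"10", "10c", "10u"}) {
      ThresholdArg t = parse(s);
      System.out.println(s + " -> " + t.percent + "% " + t.unit);
    }
  }
}
```

      Defaulting to the capacity unit when no suffix is present would keep existing scripts working unchanged.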


    People

      Assignee: Unassigned
      Reporter: Ruslan Dautkhanov (Tagar)
      Votes: 0
      Watchers: 4
