Apache Ozone / HDDS-4656 Add a container balancer tool or service for HDDS
HDDS-5757

Balancer should stop when the cluster cannot be balanced any more


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.2.0
    • Component/s: None

    Description

      When I test the container balancer in a k8s cluster, I use this command line (with the container size set to 1G):

      ./ozone admin containerbalancer start -i -1 -t 0.000001 -d 1 -s 500

      I found that the balancer thread does not stop when the cluster is close to balanced.

      In my cluster I have three datanodes d1, d2, d3 (disk usages 67G, 67G, 68G), and a fourth datanode d4 whose disk usage is 1G. When I start the balancer it begins balancing and works well: many containers are moved from d1, d2, d3 to d4. But when the cluster is close to balanced (the disk usages of the four datanodes are 50G, 52G, 50G, 51G), the balancer keeps running and moves containers among those datanodes again and again.

      Let us look at a general example. Suppose we have two datanodes d1 and d2 with disk usages of 3G and 7G respectively, so the average usage is 5G. If we set the threshold to 0.00001 (close to 0), the lowerLimit and upperLimit are both close to 5G. If the container size is 4G, then in the first iteration we recognize d2 as over-utilized and d1 as under-utilized, and schedule a move of a 4G container from d2 to d1. After the move finishes, d1 is at 7G and d2 is at 3G, so the balancer thread will go on forever, as sketched below.
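
      To make the flip concrete, here is a minimal, self-contained sketch of the arithmetic in this two-datanode example; the class and variable names are illustrative and are not taken from the actual ContainerBalancer code.

      // Minimal sketch of the arithmetic from the two-datanode example above.
      // Names are illustrative; this is not the actual ContainerBalancer code.
      public class OscillationExample {
        public static void main(String[] args) {
          double d1 = 3.0, d2 = 7.0;         // disk usage in GB
          double containerSize = 4.0;        // container size in GB
          double threshold = 0.00001;        // balancer threshold

          double average = (d1 + d2) / 2;                 // 5 GB
          double lowerLimit = average * (1 - threshold);  // just below 5 GB
          double upperLimit = average * (1 + threshold);  // just above 5 GB

          // d2 is over-utilized and d1 is under-utilized, so a move is scheduled.
          boolean moveScheduled = d2 > upperLimit && d1 < lowerLimit;

          // After moving one 4 GB container from d2 to d1 the usages simply swap,
          // so the next iteration schedules the symmetric move and never terminates.
          double newD1 = d1 + containerSize;  // 7 GB, now over-utilized
          double newD2 = d2 - containerSize;  // 3 GB, now under-utilized

          System.out.printf("scheduled=%s, d1: %.0fG -> %.0fG, d2: %.0fG -> %.0fG%n",
              moveScheduled, d1, newD1, d2, newD2);
        }
      }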

      So we should let the balancer thread exit when the cluster cannot be balanced any further. One possible check is sketched below.
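
      The sketch below illustrates one possible stop condition under my own naming: schedule a move only when it strictly reduces the combined deviation from the cluster average, and stop the balancer thread when an iteration finds no such move. It is only an illustration of the idea, not the actual implementation.

      // Hedged sketch of a possible stop condition; class and method names are
      // illustrative and do not come from the real balancer implementation.
      import java.util.List;

      public class BalanceTerminationSketch {

        /** A move helps only if it reduces the combined deviation from the average. */
        static boolean moveImprovesBalance(double sourceUsage, double targetUsage,
                                           double containerSize, double average) {
          double before = Math.abs(sourceUsage - average) + Math.abs(targetUsage - average);
          double after  = Math.abs(sourceUsage - containerSize - average)
                        + Math.abs(targetUsage + containerSize - average);
          return after < before;
        }

        /** One iteration; returns false when no useful move exists, i.e. time to stop. */
        static boolean runIteration(List<double[]> candidatePairs,  // {sourceUsage, targetUsage}
                                    double containerSize, double average) {
          boolean scheduledAny = false;
          for (double[] pair : candidatePairs) {
            if (moveImprovesBalance(pair[0], pair[1], containerSize, average)) {
              // the real balancer would schedule the container move here
              scheduledAny = true;
            }
          }
          return scheduledAny;
        }
      }

      With the 3G/7G example above, moveImprovesBalance(7, 3, 4, 5) returns false because the deviation stays at 4G either way, so no move is scheduled and the balancer thread can exit instead of starting another iteration.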

       


    People

      Assignee: jacksonyao (Jie Yao)
      Reporter: jacksonyao (Jie Yao)
      Votes: 0
      Watchers: 1
