Hadoop HDFS / HDFS-14973

Balancer getBlocks RPC dispersal does not function properly


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.9.0, 2.7.4, 2.8.2, 3.0.0
    • Fix Version/s: 3.3.0, 3.1.4, 3.2.2, 2.10.1
    • Component/s: balancer & mover
    • Labels: None

    Description

      In HDFS-11384, a mechanism was added to make the getBlocks RPC calls issued by the Balancer/Mover more dispersed, to alleviate load on the NameNode, since getBlocks can be very expensive and the Balancer should not impact normal cluster operation.

      Unfortunately, this mechanism does not work as expected, especially when the dispatcher thread count is low. The primary issue is that the delay is applied only to the first N threads that are submitted to the dispatcher's executor, where N is the size of the dispatcher's threadpool, but not to the first R threads, where R is the number of allowed getBlocks QPS (currently hardcoded to 20). For example, if the threadpool size is 100 (the default), threads 0-19 have no delay, 20-99 have increasing levels of delay, and 100+ have no delay. As I understand it, the intent of the logic was that the delay applied to the first 100 threads would force the dispatcher executor's threads to all be consumed, thus blocking subsequent (non-delayed) threads until the delay period had expired. However, threads 0-19 can finish very quickly (their work can often be fulfilled in the time it takes to execute a single getBlocks RPC, on the order of tens of milliseconds), thus opening up 20 new slots in the executor, which are then consumed by the non-delayed threads 100-119, and so on. So, although 80 threads have had a delay applied, the non-delayed threads rush through in the 20 non-delayed slots.
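
      The shape of the problem can be seen with a small standalone sketch (illustrative only; the constants and the delay formula below are assumptions modeled on the description above, not the actual Dispatcher code):

      // Illustrative sketch, not the actual Dispatcher code: models the delay
      // assignment described above, keyed off each task's submission index.
      public class GetBlocksDelaySketch {
        static long delaySeconds(int taskIndex, int threadPoolSize, int maxGetBlocksQps) {
          if (taskIndex < maxGetBlocksQps || taskIndex >= threadPoolSize) {
            return 0;  // tasks 0-19 and 100+ are submitted with no delay
          }
          // tasks 20-99 get increasing delays meant to keep executor slots occupied
          return taskIndex / maxGetBlocksQps;
        }

        public static void main(String[] args) {
          for (int i = 0; i < 120; i++) {
            System.out.printf("task %3d -> delay %ds%n", i, delaySeconds(i, 100, 20));
          }
        }
      }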

      This problem gets even worse when the dispatcher threadpool size is less than the max getBlocks QPS. For example, if the threadpool size is 10, no threads ever have a delay applied, and the feature is effectively disabled.
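
      Under the same sketch, once the threadpool size drops below the QPS cap, the no-delay branch covers every in-pool task index, so the computed delay is zero everywhere (again, purely illustrative):

      // With threadPoolSize = 10 and maxGetBlocksQps = 20, every index satisfies
      // either taskIndex < 20 or taskIndex >= 10, so no task is ever delayed.
      for (int i = 0; i < 30; i++) {
        assert GetBlocksDelaySketch.delaySeconds(i, 10, 20) == 0;
      }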

      This problem wasn't surfaced in the original JIRA because the test incorrectly measured the period across which getBlocks RPCs were distributed. The variables startGetBlocksTime and endGetBlocksTime were used to track the time over which the getBlocks calls were made. However, startGetBlocksTime was initialized at the time of creation of the FSNamesystem spy, which is before the mock DataNodes are started. Even worse, the Balancer in this test takes 2 iterations to complete balancing the cluster, so the time period endGetBlocksTime - startGetBlocksTime actually represents:

      (time to submit getBlocks RPCs) + (DataNode startup time) + (time for the Dispatcher to complete an iteration of moving blocks)
      

      Thus, the RPC QPS reported by the test is much lower than the RPC QPS seen during the period of initial block fetching.
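
      One way to tighten the measurement (a sketch under the assumption that the test can intercept each getBlocks call; not necessarily the change committed here) is to anchor the timing window to the observed getBlocks calls themselves rather than to the creation of the spy:

      import java.util.concurrent.atomic.AtomicInteger;
      import java.util.concurrent.atomic.AtomicLong;

      /**
       * Hypothetical helper, not from the actual test: measures QPS only over
       * the span in which getBlocks calls are actually observed, so DataNode
       * startup time and later dispatcher iterations are excluded.
       */
      public class GetBlocksRateTracker {
        private final AtomicInteger numCalls = new AtomicInteger();
        private final AtomicLong firstCallMs = new AtomicLong(-1);
        private final AtomicLong lastCallMs = new AtomicLong(-1);

        /** Call this from wherever the test intercepts a getBlocks RPC. */
        public void recordCall() {
          long now = System.currentTimeMillis();
          firstCallMs.compareAndSet(-1, now);  // only the first call sets the window start
          lastCallMs.set(now);                 // every call advances the window end
          numCalls.incrementAndGet();
        }

        /** QPS measured strictly across the span of observed getBlocks calls. */
        public double observedQps() {
          long spanMs = Math.max(1, lastCallMs.get() - firstCallMs.get());
          return numCalls.get() * 1000.0 / spanMs;
        }
      }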

      Attachments

        1. HDFS-14973-branch-2.005.patch
          26 kB
          Erik Krogen
        2. HDFS-14973-branch-2.004.patch
          25 kB
          Erik Krogen
        3. HDFS-14973-branch-2.003.patch
          24 kB
          Erik Krogen
        4. HDFS-14973.test.patch
          12 kB
          Erik Krogen
        5. HDFS-14973.003.patch
          17 kB
          Erik Krogen
        6. HDFS-14973.002.patch
          24 kB
          Erik Krogen
        7. HDFS-14973.001.patch
          24 kB
          Erik Krogen
        8. HDFS-14973.000.patch
          23 kB
          Erik Krogen



    People

      Assignee: Erik Krogen (xkrogen)
      Reporter: Erik Krogen (xkrogen)
      Votes: 0
      Watchers: 10
