HBASE-8884: Pluggable RpcScheduler


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.98.0
    • Component/s: IPC/RPC
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Today, the RPC scheduling mechanism is pretty simple: it executes requests in isolated thread-pools based on their priority. In the current implementation, all normal get/put requests use the same pool. We'd like to add some per-user or per-region isolation, so that a misbehaving user or region cannot easily saturate the thread-pool and deny service to others. The idea is similar to the FairScheduler in MR. The current scheduling code is not standalone; it is mixed into other logic (see Connection#processRequest). This issue is the first step: extract the scheduling into an interface, so that people are free to write and test their own implementations.
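      A minimal sketch of what such an extracted interface could look like; the names here (RpcScheduler, CallTask, dispatch) are illustrative assumptions, not necessarily what the patch commits:

      public interface RpcScheduler {

        /** Minimal view of a queued call; a real implementation would wrap the decoded request. */
        interface CallTask extends Runnable {
          int getPriority();
          String getUser();
          byte[] getRegionName();
        }

        /** Start internal worker threads / thread-pools. */
        void start();

        /** Stop workers and drain or reject any queued calls. */
        void stop();

        /**
         * Hand an incoming call to the scheduler; the implementation decides which
         * queue and pool runs it (by priority, per-user, per-region, etc.).
         */
        void dispatch(CallTask task) throws InterruptedException;
      }

      With such a split, RpcServer only enqueues calls via dispatch and stays unaware of how they are grouped or prioritized.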

      This patch doesn't make it completely pluggable yet, as some parameters are still passed through the constructor. This is because HMaster and HRegionServer both use RpcServer but have different thread-pool size configurations. Let me know if you have a solution to this.
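      One possible way to resolve the constructor-parameter question is to hide it behind a factory, so each server supplies its own Configuration; the factory name and wiring below are assumptions for illustration, not part of this patch:

      import org.apache.hadoop.conf.Configuration;

      /**
       * Illustrative factory sketch: HMaster and HRegionServer each hand their own
       * Configuration (and so their own thread-pool sizing keys) to the factory,
       * keeping those settings out of RpcServer's constructor.
       */
      public interface RpcSchedulerFactory {
        RpcScheduler create(Configuration conf);
      }

      RpcServer would then accept a ready-made RpcScheduler (or a factory) and never see the sizing config directly.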

      Attachments

        1. hbase-8884.patch
          27 kB
          Chao Shi
        2. hbase-8884-v2.patch
          43 kB
          Chao Shi
        3. hbase-8884-v3.patch
          42 kB
          Chao Shi
        4. hbase-8884-v4.patch
          44 kB
          Chao Shi
        5. hbase-8884-v5.patch
          44 kB
          Chao Shi
        6. hbase-8884-v6.patch
          45 kB
          Chao Shi
        7. hbase-8884-v7.patch
          46 kB
          Chao Shi
        8. hbase-8884-v8.patch
          49 kB
          Chao Shi

            People

              Assignee: Chao Shi (stepinto)
              Reporter: Chao Shi (stepinto)
              Votes: 0
              Watchers: 21
