
HBASE-9501: Provide throttling for replication


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.98.1, 0.99.0
    • Component/s: Replication
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note: A new configuration, replication.source.per.peer.node.bandwidth, is added by this JIRA. The default is 0, which means no throttling. The unit of this configuration is bytes per second. (See the usage sketch below.)
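
    As a quick illustration of the new setting: the property name, unit, and default below come from the release note above, while the 10 MB/s value and the class name are only examples. In a real deployment the property would be set in hbase-site.xml on the source cluster's region servers rather than programmatically.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ReplicationThrottleConfigExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Bytes per second pushed to each peer; 0 (the default) disables throttling.
    // The ~10 MB/s cap below is purely illustrative.
    conf.setLong("replication.source.per.peer.node.bandwidth", 10L * 1024 * 1024);

    System.out.println("per-peer bandwidth cap = "
        + conf.getLong("replication.source.per.peer.node.bandwidth", 0) + " bytes/sec");
  }
}
{code}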

    Description

      When we disable a peer for a period of time and then enable it again, the ReplicationSource in the master cluster pushes the HLog entries that accumulated during the disabled interval to the re-enabled peer cluster at full speed.

      If the bandwidth between the two clusters is shared by other applications, a full-speed replication push can consume all of it and severely affect those applications.

      There are two configs, replication.source.size.capacity and replication.source.nb.capacity, for tweaking the batch size of each push, but decreasing them only increases the number of pushes, and the pushes still proceed back-to-back without pause. They offer no real help with bandwidth throttling.

      From a bandwidth-sharing and push-speed perspective, it is more reasonable to put an upper limit on the bandwidth of each peer's push channel; within that limit, the peer can still choose a large batch size per push for bandwidth efficiency (see the sketch below).
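
      To make the proposal concrete, here is a minimal sketch of one way such a per-peer throttle could work, assuming simple cycle-based accounting. This is an illustration, not the patch attached to this issue; PerPeerThrottler and beforePush are hypothetical names.

{code:java}
/**
 * Minimal sketch (hypothetical, not the committed patch): bytes shipped are
 * accounted against a per-100ms quota derived from the configured
 * bytes-per-second limit; once the quota is used up, the caller sleeps out
 * the remainder of the cycle before pushing more.
 */
public class PerPeerThrottler {
  private static final long CYCLE_MS = 100;
  private final long bytesPerCycle; // quota for each 100 ms cycle
  private long cycleStartMs = System.currentTimeMillis();
  private long cycleBytes = 0;

  /** @param bytesPerSecond the bandwidth cap; 0 or less means unthrottled */
  public PerPeerThrottler(long bytesPerSecond) {
    this.bytesPerCycle = bytesPerSecond / (1000 / CYCLE_MS);
  }

  /** Call before shipping a batch; blocks if the current cycle's quota is spent. */
  public void beforePush(long batchBytes) throws InterruptedException {
    if (bytesPerCycle <= 0) {
      return; // throttling disabled
    }
    long now = System.currentTimeMillis();
    if (now - cycleStartMs >= CYCLE_MS) {
      // A new cycle has begun: reset the quota window.
      cycleStartMs = now;
      cycleBytes = 0;
    }
    if (cycleBytes + batchBytes > bytesPerCycle) {
      // Quota exhausted: sleep until the cycle ends, then start a fresh one.
      long sleepMs = CYCLE_MS - (now - cycleStartMs);
      if (sleepMs > 0) {
        Thread.sleep(sleepMs);
      }
      cycleStartMs = System.currentTimeMillis();
      cycleBytes = 0;
    }
    cycleBytes += batchBytes;
  }
}
{code}

      A replication source could call beforePush(batchSizeInBytes) before each shipment; with a generous batch size, each cycle still ships one large, efficient batch while the long-run rate stays under the cap, which is exactly the bandwidth-efficiency point above.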

      Any opinion?

      Attachments

        1. HBASE-9501-trunk_v0.patch
          4 kB
          Honghua Feng
        2. HBASE-9501-trunk_v1.patch
          11 kB
          Honghua Feng
        3. HBASE-9501-trunk_v2.patch
          10 kB
          Honghua Feng
        4. HBASE-9501-trunk_v3.patch
          12 kB
          Honghua Feng
        5. HBASE-9501-trunk_v4.patch
          12 kB
          Jean-Daniel Cryans


    People

      Assignee: Honghua Feng (fenghh)
      Reporter: Honghua Feng (fenghh)
      Votes: 0
      Watchers: 10
