HBase / HBASE-16698

Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.2.3
    • Fix Version/s: 1.4.0, 2.0.0
    • Component/s: Performance
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      Assign a sequence id to an edit before it goes on the ring buffer; this removes the contention on the WALKey latch. Adds a new config "hbase.hregion.mvcc.preassign", which defaults to true, i.e. the speedup is enabled by default.

      Users can also set this at the table level, for example:
      create 'table',{NAME=>'f1',CONFIGURATION=>{'hbase.hregion.mvcc.preassign'=>'false'}}

    Description

      As titled, in our production environment we observed 98 out of 128 handlers stuck waiting on the CountDownLatch seqNumAssignedLatch inside WALKey#getWriteEntry under a high write workload.

      After digging into the problem, we found that it is mainly caused by advancing the mvcc in the append logic. Below is a more detailed analysis:

      Under the current branch-1 code logic, all batch puts call WALKey#getWriteEntry after appending the edit to the WAL, and seqNumAssignedLatch is only released when the corresponding append call is handled by RingBufferEventHandler (see FSWALEntry#stampRegionSequenceId). Because we currently use a single event handler for the ring buffer, the append calls are handled one by one (in fact, a lot of our current logic depends on this sequential handling), and this becomes a bottleneck under a high write workload.
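
      For illustration, here is a minimal sketch of that latch hand-off; the class below is a simplified stand-in for WALKey (with FSWALEntry's stamping folded in), not the actual HBase source:

      import java.util.concurrent.CountDownLatch;

      class SketchWALKey {
          private final CountDownLatch seqNumAssignedLatch = new CountDownLatch(1);
          private volatile long sequenceId = -1;

          // Called once by the single RingBufferEventHandler thread when it
          // stamps the sequence id for this append
          // (cf. FSWALEntry#stampRegionSequenceId).
          void stampSequenceId(long seqId) {
              this.sequenceId = seqId;
              seqNumAssignedLatch.countDown();
          }

          // Called by each handler thread after publishing the append; with a
          // single consumer thread, many handlers pile up here under heavy writes.
          long getWriteEntry() throws InterruptedException {
              seqNumAssignedLatch.await();
              return sequenceId;
          }
      }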

      The worst part is that by default we only use one WAL per RS, so appends on all regions are handled sequentially, which causes contention among different regions...

      To fix this, we can still take advantage of the "sequential appends" mechanism: grab the WriteEntry before publishing the append onto the ring buffer and use it as the sequence id; we only need to add a lock so that "grab WriteEntry" and "append edit" form a single transaction. This will still cause contention inside a region but avoids contention between different regions. This solution has already been verified in our online environment and proved to be effective.
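
      A hedged sketch of that idea (not the committed patch); MvccSketch and the method names below are illustrative stand-ins for MultiVersionConcurrencyControl and the real append path:

      import java.util.concurrent.locks.ReentrantLock;

      class PreAssignSketch {
          // Stand-in for MultiVersionConcurrencyControl: begin() hands out
          // the next write number (hypothetical, for illustration only).
          static class MvccSketch {
              private long writePoint = 0;
              synchronized long begin() { return ++writePoint; }
          }

          private final ReentrantLock appendLock = new ReentrantLock(); // one per region
          private final MvccSketch mvcc = new MvccSketch();

          // "Grab WriteEntry" and "append edit" become one critical section, so
          // the sequence id is known before the ring buffer consumer ever runs
          // and no handler has to block on a latch afterwards.
          long appendWithPreAssignedSeqId(Runnable publishToRingBuffer) {
              appendLock.lock(); // serializes writes within this region only
              try {
                  long seqId = mvcc.begin();  // assign the sequence id up front
                  publishToRingBuffer.run();  // publish with seqId already set
                  return seqId;
              } finally {
                  appendLock.unlock();
              }
          }
      }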

      Notice that for the master (2.0) branch, since we have already changed the write pipeline to sync before writing the memstore (HBASE-15158), this issue only exists for the ASYNC_WAL write scenario.
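
      For reference, ASYNC_WAL is something a client opts into per table (or per mutation); a minimal sketch using the 2.0 client API, with placeholder table and family names:

      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.client.Admin;
      import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
      import org.apache.hadoop.hbase.client.Durability;
      import org.apache.hadoop.hbase.client.TableDescriptor;
      import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

      class AsyncWalTableExample {
          // Creates a table whose edits use ASYNC_WAL durability; table and
          // family names are placeholders.
          static void createAsyncWalTable(Admin admin) throws Exception {
              TableDescriptor td = TableDescriptorBuilder
                  .newBuilder(TableName.valueOf("example_table"))
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f1"))
                  .setDurability(Durability.ASYNC_WAL) // WAL sync happens asynchronously
                  .build();
              admin.createTable(td);
          }
      }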

      Attachments

        1. hadoop0495.et2.jstack (348 kB, Yu Li)
        2. HBASE-16698.branch-1.patch (15 kB, Yu Li)
        3. HBASE-16698.branch-1.v2.patch (15 kB, Michael Stack)
        4. HBASE-16698.branch-1.v2.patch (15 kB, Yu Li)
        5. HBASE-16698.patch (12 kB, Yu Li)
        6. HBASE-16698.v2.patch (12 kB, Yu Li)


            People

              Assignee: Yu Li (liyu)
              Reporter: Yu Li (liyu)
              Votes: 0
              Watchers: 24

              Dates

                Created:
                Updated:
                Resolved: