HDDS-8765: OM (read) lock performance improvement



    Description

      Problem

      Today, OM manages resource locks (volume, bucket) through LockManager, a component that wraps a lock table, a ConcurrentHashMap holding all active locks. Locks are dynamically allocated in, and removed from, the lock table based on runtime needs. This means that for every allocated lock, a usage count is kept up to date to decide when the lock is no longer needed.

      The current performance of LockManager is limited by the cost of maintaining the liveness of each individual lock, i.e., counting how many threads are concurrently using a lock and removing it from the lock table when it's no longer used.

      This cost mainly comes from the need to synchronize all concurrent access to each lock (technically, to its ConcurrentHashMap node) in two places:

      1. Acquiring a lock: create the lock object if it does not yet exist in the table and increment its usage count.
      2. Releasing a lock: decrement the usage count and remove the lock from the table when the count reaches 0.

      This synchronization happens inside two ConcurrentHashMap methods: compute and computeIfPresent.

      It becomes a bottleneck when multiple threads acquire and release the same lock concurrently, even for read locks.
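
      For illustration, below is a minimal sketch of this dynamic lock-table pattern (hypothetical class and field names, not the actual LockManager code). Both the acquire and release paths funnel through a synchronized update of the resource's map node:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: a per-resource lock entry with a usage counter, stored in a
// ConcurrentHashMap keyed by resource name.
class DynamicLockTable {
  static final class Entry {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    int activeCount;
  }

  private final ConcurrentHashMap<String, Entry> lockTable = new ConcurrentHashMap<>();

  ReentrantReadWriteLock acquireEntry(String resource) {
    // compute() locks the map node while creating the entry (if absent)
    // and bumping the usage count -- this is the contended section.
    return lockTable.compute(resource, (key, entry) -> {
      if (entry == null) {
        entry = new Entry();
      }
      entry.activeCount++;
      return entry;
    }).lock;
  }

  void releaseEntry(String resource) {
    // computeIfPresent() locks the same node again; returning null removes
    // the entry once no thread is using the lock.
    lockTable.computeIfPresent(resource, (key, entry) -> {
      entry.activeCount--;
      return entry.activeCount == 0 ? null : entry;
    });
  }
}
{code}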

      Experiment

      I ran an experiment with pure OM key reads against the same bucket using 100 reader threads. The freon command looks like the following:

      ozone freon ockrw -v duong -b obs --contiguous -n 50000000 -p zerobytes --percentage-read=100 -r 5000000 -s 0 --size 0 -m -t 100 

      With the current code, the total throughput tops out at ~100K ops/s and getKeyInfo latency is ~800 μs. Acquiring and releasing a lock takes ~40 μs. Note that for each getKeyInfo request, OM acquires and releases the volume and bucket locks multiple times.

      With a quick and naive change that removes the synchronization when acquiring and releasing locks, getKeyInfo latency drops to ~400 μs and total throughput rises to ~160K ops/s. The time to acquire and release a lock drops to 4-5 μs. Please note that this only demonstrates the impact of the synchronization and is not a practical change, as the synchronization is essential to the correctness of dynamic lock management.

      Proposed solution

      It's been shown that dynamic per-resource lock allocation (and deallocation) is costly, especially for hotspots like bucket/volume-level locks. The synchronization is costly even for single-threaded access.

      A simpler and more performant solution is lock striping: preallocate an array of locks and select the lock for a resource based on its hashcode % array_size. Locks are preallocated, reused, and never deallocated.

      Implementation-wise, we don't need to reinvent the wheel; we can use Guava's Striped, as sketched below.
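
      A sketch of how this could look with Guava's Striped; the stripe count and key format here are illustrative only, not values decided in this ticket:

{code:java}
import com.google.common.util.concurrent.Striped;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;

class StripedLockExample {
  // Preallocated, fixed-size pool of read-write locks; never deallocated.
  private final Striped<ReadWriteLock> bucketLocks = Striped.readWriteLock(1024);

  void readBucket(String volume, String bucket) {
    // Striped.get() maps the key's hash onto one of the preallocated
    // stripes, so no lock object is created or destroyed per request.
    ReadWriteLock rwLock = bucketLocks.get(volume + "/" + bucket);
    Lock readLock = rwLock.readLock();
    readLock.lock();
    try {
      // read bucket/key metadata under the shared read lock
    } finally {
      readLock.unlock();
    }
  }
}
{code}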

      To minimize the chance of collision (different resources mapping to the same lock), the array size (the Striped size in Guava terms) can be a multiple of the number of worker threads. A 10x factor keeps the chance of collision close to zero.

      We may also introduce a separate Striped instance for each resource type (volume, bucket, key) to avoid cross-resource-type collisions. That also allows choosing a stripe size appropriate to each resource type's cardinality, e.g. a bigger size for key locks than for bucket locks.

      The lock stripe size should be configurable. Lock-related metrics (e.g. lock waiting time) can be used to identify collisions and tune the stripe size. A sketch of this layout follows.
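
      Below is a sketch of per-resource-type stripes with separately configurable sizes; the class name, method names and parameters are hypothetical and only illustrate the idea:

{code:java}
import com.google.common.util.concurrent.Striped;
import java.util.concurrent.locks.ReadWriteLock;

class OmLockStripes {
  private final Striped<ReadWriteLock> volumeLocks;
  private final Striped<ReadWriteLock> bucketLocks;
  private final Striped<ReadWriteLock> keyLocks;

  // Stripe sizes would come from configuration, e.g. ~10x the number of
  // handler threads, with a larger value for high-cardinality key locks.
  OmLockStripes(int volumeStripes, int bucketStripes, int keyStripes) {
    this.volumeLocks = Striped.readWriteLock(volumeStripes);
    this.bucketLocks = Striped.readWriteLock(bucketStripes);
    this.keyLocks = Striped.readWriteLock(keyStripes);
  }

  ReadWriteLock volumeLock(String volume) {
    return volumeLocks.get(volume);
  }

  ReadWriteLock bucketLock(String volume, String bucket) {
    return bucketLocks.get(volume + "/" + bucket);
  }

  ReadWriteLock keyLock(String volume, String bucket, String key) {
    return keyLocks.get(volume + "/" + bucket + "/" + key);
  }
}
{code}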

      Pros

      • Fast and scalable: eliminates the cost of dynamically allocating/deallocating locks, especially for hotspots (e.g. hot bucket-level locks) or high-cardinality resources (e.g. key-level locks). This is a good foundation for the key-level locking we're planning to do.
      • Simple: no synchronization complexity, no deallocation, and an existing library (Guava's Striped).
      • GC friendly.

      Cons

      • Introduces a risk of lock collision, which can be largely eliminated by extending the lock array to e.g. 10x the number of threads (which is not expensive).

       

      Attachments

        1. image-2023-06-05-22-49-47-147.png (1.85 MB, Duong)
        2. latency-after.png (2.36 MB, Duong)
        3. latency-before.png (1.85 MB, Duong)
        4. Screenshot 2023-06-05 at 4.31.14 PM.png (879 kB, Duong)
        5. Striped locks in Ozone Manager.pdf (448 kB, Duong)

      People

        Assignee: Duong (duongnguyen)
        Reporter: Duong (duongnguyen)
