[HBASE-15560] TinyLFU-based BlockCache


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0
    • Fix Version/s: 3.0.0-alpha-1, 2.3.0
    • Component/s: BlockCache
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and recency of the working set. It achieves concurrency by using an O(n) background thread to prioritize the entries and evict. Accessing an entry is O(1): a hash table lookup, recording its logical access time, and setting a frequency flag. A write is performed in O(1) time by updating the hash table and triggering an async eviction thread. This provides ideal concurrency and minimizes latencies by penalizing the background thread instead of the caller. However, the policy does not age the frequencies and may not be resilient to various workload patterns.

      This change introduces a new L1 policy, TinyLfuBlockCache, which records the frequency in a counting sketch, ages periodically by halving the counters, and orders entries by SLRU. An entry is discarded by comparing the frequency of the new arrival to the SLRU's victim, and keeping the one with the higher frequency. This allows the operations to be performed in O(1) time and, through the use of a compact sketch, a much larger history is retained beyond the current working set. In a variety of real-world traces the policy had near-optimal hit rates.

      A new configuration variable, hfile.block.cache.policy, sets the eviction policy for the L1 block cache. The default is "LRU" (LruBlockCache). Set it to "TinyLFU" to use TinyLfuBlockCache instead.
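      For example, the policy can be switched by setting this property in hbase-site.xml (a minimal sketch using only the variable name and value stated above):

```xml
<!-- Use the TinyLFU-based L1 block cache instead of the default LRU policy -->
<property>
  <name>hfile.block.cache.policy</name>
  <value>TinyLFU</value>
</property>
```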

    Description

      LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and recency of the working set. It achieves concurrency by using an O(n) background thread to prioritize the entries and evict. Accessing an entry is O(1): a hash table lookup, recording its logical access time, and setting a frequency flag. A write is performed in O(1) time by updating the hash table and triggering an async eviction thread. This provides ideal concurrency and minimizes latencies by penalizing the background thread instead of the caller. However, the policy does not age the frequencies and may not be resilient to various workload patterns.

      W-TinyLFU (research paper) records the frequency in a counting sketch, ages periodically by halving the counters, and orders entries by SLRU. An entry is discarded by comparing the frequency of the new arrival (candidate) to the SLRU's victim, and keeping the one with the higher frequency. This allows the operations to be performed in O(1) time and, through the use of a compact sketch, a much larger history is retained beyond the current working set. In a variety of real-world traces the policy had near-optimal hit rates.
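      The admission decision above can be sketched as follows. This is illustrative Java, not HBase's or Caffeine's actual classes: a single-row saturating counter array stands in for the real multi-hash CountMin sketch, and the names (FrequencySketchDemo, recordAccess, admit) are hypothetical.

```java
/**
 * Minimal sketch of the W-TinyLFU ideas described above: a frequency
 * sketch with 4-bit-style saturating counters, periodic aging by
 * halving, and an admit() test that keeps whichever of the candidate
 * or the SLRU victim is estimated to be more frequent.
 */
final class FrequencySketchDemo {
  private final int[] counts;    // one counter per slot (simplified: single row)
  private final int sampleSize;  // accesses between aging passes
  private int operations;

  FrequencySketchDemo(int slots, int sampleSize) {
    this.counts = new int[slots];
    this.sampleSize = sampleSize;
  }

  /** Record an access in O(1); trigger aging after a sample period. */
  void recordAccess(Object key) {
    int slot = Math.floorMod(key.hashCode(), counts.length);
    if (counts[slot] < 15) {  // saturate at 15, like a 4-bit counter
      counts[slot]++;
    }
    if (++operations >= sampleSize) {
      age();
    }
  }

  /** Estimated access frequency of a key. */
  int frequency(Object key) {
    return counts[Math.floorMod(key.hashCode(), counts.length)];
  }

  /** Halve all counters so stale history decays over time. */
  private void age() {
    for (int i = 0; i < counts.length; i++) {
      counts[i] >>>= 1;
    }
    operations = 0;
  }

  /** Admission: keep the new arrival only if it is more popular than the victim. */
  boolean admit(Object candidate, Object victim) {
    return frequency(candidate) > frequency(victim);
  }
}
```

      Because the sketch is compact (a few bits per counter) it can remember far more keys than the cache holds entries, which is how the policy retains history beyond the current working set.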

      Concurrency is achieved by buffering and replaying the operations, similar to a write-ahead log. A read is recorded into a striped ring buffer and a write into a queue. The operations are applied in batches under a try-lock by an asynchronous thread, thereby tracking the usage pattern without incurring high latencies (benchmarks).
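      A minimal illustration of this buffer-and-replay idea (hypothetical names, not the patch's code; a single bounded queue stands in for the striped ring buffer):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Sketch of buffer-and-replay concurrency: reads are recorded into a
 * bounded, lossy buffer and later drained in a batch under a try-lock,
 * so callers never block on eviction-policy maintenance.
 */
final class ReadBufferDemo {
  private final ArrayBlockingQueue<Object> readBuffer = new ArrayBlockingQueue<>(128);
  private final ReentrantLock evictionLock = new ReentrantLock();
  private int drained;  // number of reads replayed into the policy so far

  /** Record a read without blocking; if the buffer is full the event is dropped. */
  void afterRead(Object key) {
    readBuffer.offer(key);  // non-blocking, lossy on overflow
  }

  /** Drain pending events in a batch, but only if no one else holds the lock. */
  void maybeDrain() {
    if (evictionLock.tryLock()) {
      try {
        while (readBuffer.poll() != null) {
          drained++;  // a real policy would update the SLRU order here
        }
      } finally {
        evictionLock.unlock();
      }
    }
  }

  int drainedCount() { return drained; }
}
```

      Dropping a buffered read on overflow is acceptable because the sketch is only a statistical hint; losing a sample degrades prediction quality slightly rather than correctness.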

      In YCSB benchmarks the results were inconclusive. For a large cache (99% hit rates) the two caches have near-identical throughput and latencies, with LruBlockCache narrowly winning. At medium and small cache sizes, TinyLFU had a 1-4% hit rate improvement and therefore lower latencies. The lackluster result is because a synthetic Zipfian distribution is used, on which SLRU performs optimally. In a more varied, real-world workload we'd expect to see improvements from being able to make smarter predictions.

      The provided patch implements BlockCache using the Caffeine caching library (see HighScalability article).

      Edward Bortnikov and Eshcar Hillel have graciously provided guidance for evaluating this patch (github branch).

      Attachments

        1. HBASE-15560.patch
          55 kB
          Andrew Kyle Purtell
        2. HBASE-15560.patch
          55 kB
          Andrew Kyle Purtell
        3. HBASE-15560.patch
          54 kB
          Andrew Kyle Purtell
        4. HBASE-15560.patch
          54 kB
          Andrew Kyle Purtell
        5. HBASE-15560.patch
          54 kB
          Andrew Kyle Purtell
        6. run_ycsb_c.sh
          6 kB
          Michael Stack
        7. run_ycsb_loading.sh
          3 kB
          Michael Stack
        8. branch-1.tinylfu.txt
          100 kB
          Michael Stack
        9. gets
          19 kB
          Michael Stack
        10. bc.miss.count
          25 kB
          Michael Stack
        11. bc.hit.count
          21 kB
          Michael Stack
        12. HBASE-15560.patch
          57 kB
          Ben Manes
        13. HBASE-15560.patch
          66 kB
          Ben Manes
        14. HBASE-15560.patch
          66 kB
          Ben Manes
        15. HBASE-15560.patch
          66 kB
          Ben Manes
        16. HBASE-15560.patch
          65 kB
          Ben Manes
        17. HBASE-15560.patch
          64 kB
          Ben Manes
        18. HBASE-15560.patch
          33 kB
          Ben Manes
        19. tinylfu.patch
          33 kB
          Ben Manes


            People

              Assignee: Ben Manes
              Reporter: Ben Manes
              Votes: 0
              Watchers: 36
