IMPALA-10127

LIRS enforcement of tombstone limit has pathological performance scenarios

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: Impala 4.0.0
    • Fix Version/s: Impala 4.0.0
    • Component/s: Backend
    • Labels: None

    Description

      LIRS maintains metadata for some tombstone entries that have been evicted from the cache (but might be seen again). To limit the memory used for these entries, the lirs_tombstone_multiple setting limits the total number of tombstone entries. The enforcement walks the recency list from oldest to newest and removes tombstone entries until the count is back under the limit. The walk has to skip over any non-tombstone entries it encounters before reaching a tombstone entry, and there are pathological cases where it needs to walk past a very large number of entries before reaching one.
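
      As a rough illustration, here is a simplified sketch (hypothetical code, not the actual Impala LIRS implementation) of this style of enforcement: the walk starts at the oldest end of the recency list and only erases tombstone entries, skipping everything else.

      #include <cstdint>
      #include <list>

      // Simplified sketch of the current enforcement strategy: walk the recency
      // list from oldest to newest, erasing tombstones and skipping everything
      // else until the tombstone count is back under the limit.
      struct Entry {
        bool is_tombstone;  // metadata-only entry for a key that was evicted
        // ... key, charge, LIRS state, etc. elided
      };

      // recency_list is ordered oldest (front) to newest (back).
      void EnforceTombstoneLimit(std::list<Entry>& recency_list,
                                 int64_t& num_tombstones,
                                 int64_t tombstone_limit) {
        auto it = recency_list.begin();
        while (num_tombstones > tombstone_limit && it != recency_list.end()) {
          if (it->is_tombstone) {
            it = recency_list.erase(it);  // reclaim the tombstone's metadata
            --num_tombstones;
          } else {
            ++it;  // protected and unprotected entries are skipped, never removed
          }
        }
      }

      The walk is cheap when tombstones sit near the oldest end of the list, but it degenerates when the oldest portion of the list contains no tombstones at all, which is exactly the scenario described next.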

      Suppose that the cache never accesses the same entry more than once. Starting from empty, the first entries representing 95% of the capacity are automatically considered protected. The subsequent accesses are all unprotected. In order to dislodge a protected entry, an entry needs to be accessed more than once. If every entry is unique, the protected entries are never touched again and form a contiguous block as the oldest entries on the recency list. Tombstone entries come next, and unprotected entries are the newest entries on the recency list. It looks like this:

      Oldest
        Protected entries (representing 95% of cache capacity)
        ...
        Tombstone entries
        ...
        Unprotected entries (representing 5% of cache capacity)
      Newest

      To enforce the tombstone limit, the walk would need to pass all of the protected entries to reach a single tombstone entry. I modified cache-bench to add a case with a UNIFORM distribution but a 500x ratio of unique entries to the cache size. This shows pathological performance compared to the 3x ratio:

      I0831 18:22:06.356406  2605 cache-bench.cc:180] Warming up...
      I0831 18:22:07.357687  2605 cache-bench.cc:183] Running benchmark...
      I0831 18:22:22.358944  2605 cache-bench.cc:191] UNIFORM ratio=3.00x n_unique=786432: 3.48M lookups/sec <---- FINE
      I0831 18:22:22.358958  2605 cache-bench.cc:192] UNIFORM ratio=3.00x n_unique=786432: 33.3% hit rate
      I0831 18:22:22.961802  2605 cache-bench.cc:180] Warming up...
      I0831 18:22:24.010735  2605 cache-bench.cc:183] Running benchmark...
      I0831 18:22:39.026588  2605 cache-bench.cc:191] UNIFORM ratio=500.00x n_unique=131072000: 1.31k lookups/sec <----- OUCH
      I0831 18:22:39.026614  2605 cache-bench.cc:192] UNIFORM ratio=500.00x n_unique=131072000: 0.2% hit rate
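
      For a sense of scale, both runs imply the same cache size (786432 / 3 = 131072000 / 500 = 262144 entries), so roughly 95% of those entries form the protected block that the enforcement walk has to skip. A hypothetical back-of-the-envelope sketch:

      #include <cstdint>
      #include <iostream>

      // Back-of-the-envelope estimate (hypothetical numbers derived from the
      // benchmark output above) of how many list nodes the enforcement walk
      // visits before it can reclaim a single tombstone.
      int main() {
        const int64_t cache_entries = 262144;  // 786432 / 3 == 131072000 / 500
        const int64_t protected_entries = cache_entries * 95 / 100;  // ~249k, oldest and never re-accessed
        std::cout << "nodes walked per tombstone reclaimed: ~" << protected_entries << "\n";
        return 0;
      }

      Walking on the order of 250k list nodes to reclaim a single tombstone, potentially on every insertion once the tombstone limit has been hit, is roughly consistent with the ~2500x drop in lookup throughput seen above.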

      We should rework the enforcement of the tombstone limit to avoid walking past all those entries. One option is to keep the tombstone entries on their own list.
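
      As a rough sketch of that option (hypothetical code, not the committed fix), tombstone entries could additionally be threaded onto their own list in insertion order, so that enforcement pops the oldest tombstones directly instead of scanning the recency list:

      #include <cstdint>
      #include <list>

      struct Entry {
        bool is_tombstone = false;
        // ... key, charge, recency-list linkage, etc. elided
      };

      // Secondary index of tombstone entries only, oldest at the front.
      struct TombstoneList {
        std::list<Entry*> tombstones;

        void Add(Entry* e) { tombstones.push_back(e); }

        // Drop the oldest tombstones until we are back under the limit. Each
        // removal is O(1) and never touches protected or unprotected entries.
        void EnforceLimit(int64_t tombstone_limit) {
          while (static_cast<int64_t>(tombstones.size()) > tombstone_limit) {
            Entry* victim = tombstones.front();
            tombstones.pop_front();
            // ... also unlink 'victim' from the recency list and free its metadata
            (void)victim;
          }
        }
      };

      A real implementation would also need to unlink a tombstone from this secondary list if its key is seen again and the entry is promoted back to resident; an intrusive list (or an iterator stored in the entry) keeps that removal O(1).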

      Note that without the tombstone limit, this pathological case would use an unbounded amount of memory (because the tombstone entries would never reach the oldest end of the recency list and get removed).

          People

            Assignee: Joe McDonnell
            Reporter: Joe McDonnell
