Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: io
    • Labels: None
    • Release Note:
      Use the LruBlockCache default if your dataset fits in the blockcache. If you are seeing block cache churn, or you want a block cache that is immune to the vagaries of GC, deploy the offheap BucketCache. See http://people.apache.org/~stack/bc/

Description

      One way to realize the parent issue is to just enable bucket cache all the time; i.e. always have offheap enabled. Would have to do some work to make it drop-dead simple on initial setup (I think it doable).
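
      For reference, a minimal sketch of what that offheap deploy looks like with today's settings; these are the existing hbase-site.xml BucketCache properties (nothing new from this issue), and the 4096 MB capacity is just an example:

        <!-- hbase-site.xml: enable the offheap BucketCache -->
        <property>
          <name>hbase.bucketcache.ioengine</name>
          <value>offheap</value>
        </property>
        <property>
          <name>hbase.bucketcache.size</name>
          <!-- total BucketCache capacity, in megabytes -->
          <value>4096</value>
        </property>

      You also have to give the JVM direct memory to back the cache, e.g. HBASE_OFFHEAPSIZE=5G in hbase-env.sh; collapsing these steps into a sane default is the setup work mentioned above.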

      So, the upside would be the usual offheap benefits (less GC, less likely to go away and never come back because of a full GC when the heap is large, etc.).

      The downside is higher latency. In Nick's BlockCache 101 there is little to no difference between onheap and offheap. In a basic comparison doing scans and gets – details to follow – I see BucketCache do about 20% fewer ops than LRUBC when everything is in cache, and maybe 10% fewer ops when falling out of cache. I can't tell a difference in the means, and the 95th and 99th percentiles are roughly the same (more stable with BucketCache). The GC profile is much better with BucketCache – way less GC. BucketCache uses about 7% more user CPU.

      More detail on the comparison to follow.

      I think the numbers differ enough that we should probably do the lhofhansl suggestion: allow you to have a table sit in the LRUBC, something the current bucket cache layout does not do.

Attachments

Issue Links

Activity

People

    Assignee: stack (Michael Stack)
    Reporter: stack (Michael Stack)
    Votes: 0
    Watchers: 16

Dates

    Created:
    Updated:
    Resolved: