IGNITE-6934: SQL: evaluate performance of onheap row caching


Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Labels: sql

    Description

      Ignite has a so-called "on-heap cache" feature. When a cache entry is accessed, we copy it from offheap to heap and put it into a temporary concurrent hash map ([1], [2]), where it resides while in use. When the operation is finished, the entry is evicted. This is the default behavior, which keeps GC pressure low even for large in-memory data sets.

      The downside is that we lose time on copying from offheap to heap. To mitigate this problem, the user can enable the on-heap cache through IgniteCache.onheapCacheEnabled. In this mode the entry is not evicted from the on-heap map, so it can be reused across operations without additional copying. Eviction rules are managed through an eviction policy.
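
      For reference, a minimal sketch of enabling the existing on-heap cache via the public configuration API (assuming Ignite 2.x; the cache name, the LRU eviction policy factory and its size are illustrative and not part of this ticket):

      {code:java}
      import org.apache.ignite.Ignite;
      import org.apache.ignite.IgniteCache;
      import org.apache.ignite.Ignition;
      import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
      import org.apache.ignite.configuration.CacheConfiguration;

      public class OnHeapCacheExample {
          public static void main(String[] args) {
              try (Ignite ignite = Ignition.start()) {
                  CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("demo");

                  // Keep deserialized entries on heap between operations.
                  ccfg.setOnheapCacheEnabled(true);

                  // Bound the number of on-heap copies with an eviction policy (size is illustrative).
                  ccfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

                  IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

                  cache.put(1, "value");
                  System.out.println(cache.get(1));
              }
          }
      }
      {code}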

      Unfortunately, SQL cannot use this optimization. As a result, if the key or value is large enough, we lose a lot of time on memory copying. We also cannot use the current on-heap cache directly, because in SQL we operate on row links rather than on keys. So to apply this optimization to SQL we should either create an additional row cache or hack the existing cache somehow.

      As a first step I propose to evaluate the impact with a quick and dirty solution (a rough sketch of the map from step 1 follows this list):
      1) Just add another map from row link to key-value pair in the same cache, putting all concurrency issues aside.
      2) Use this cache from the SQL engine.
      3) Measure the impact.
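
      A rough, purely illustrative sketch of the map from step 1 (class and method names are hypothetical, not actual Ignite internals; concurrency, memory bounds and invalidation are deliberately ignored):

      {code:java}
      import java.util.AbstractMap.SimpleImmutableEntry;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical per-cache map from row link to the deserialized key-value pair.
      class LinkRowCache<K, V> {
          private final ConcurrentHashMap<Long, Map.Entry<K, V>> rows = new ConcurrentHashMap<>();

          // Remember a row once it has been copied from offheap.
          void onRowRead(long link, K key, V val) {
              rows.putIfAbsent(link, new SimpleImmutableEntry<>(key, val));
          }

          // SQL engine consults this map by link before copying from offheap again.
          Map.Entry<K, V> lookup(long link) {
              return rows.get(link);
          }

          // Drop the cached copy when the row is updated or removed.
          void onRowChanged(long link) {
              rows.remove(link);
          }
      }
      {code}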

      [1] org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl
      [2] org.apache.ignite.internal.processors.cache.GridCacheConcurrentMap.CacheMapHolder
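
      For step 3, a possible quick timing harness over the public SQL API (requires the ignite-indexing module on the classpath; cache name, data volume and query are illustrative, and a proper evaluation would rather use Yardstick or a similar benchmark suite):

      {code:java}
      import java.util.List;

      import org.apache.ignite.Ignite;
      import org.apache.ignite.IgniteCache;
      import org.apache.ignite.Ignition;
      import org.apache.ignite.cache.query.SqlFieldsQuery;
      import org.apache.ignite.configuration.CacheConfiguration;

      public class SqlScanTimer {
          public static void main(String[] args) {
              try (Ignite ignite = Ignition.start()) {
                  CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("demo");
                  ccfg.setIndexedTypes(Integer.class, String.class);

                  IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

                  for (int i = 0; i < 100_000; i++)
                      cache.put(i, "value-" + i);

                  SqlFieldsQuery qry = new SqlFieldsQuery("select _key, _val from String");

                  // Warm up, then time one run; compare with and without the row cache change.
                  for (int i = 0; i < 3; i++)
                      cache.query(qry).getAll();

                  long start = System.nanoTime();
                  List<List<?>> rows = cache.query(qry).getAll();
                  System.out.printf("Rows: %d, time: %.1f ms%n", rows.size(), (System.nanoTime() - start) / 1e6);
              }
          }
      }
      {code}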


          People

            Unassigned Unassigned
            vozerov Vladimir Ozerov
            Votes:
            0 Vote for this issue
            Watchers:
            1 Start watching this issue
