Just ran across a scenario where a user can inadvertently introduce a memory leak in their app. This happens when an app uses query caching with JCacheQueryCache and the EhCache provider in the backend, and the cache key space is large or growing. The latter condition is met when a local cache is in use and new DataContexts are created for new jobs/requests (each DataContext introduces its own key subspace). In this case cache entries (including their DataContexts) are retained in memory indefinitely, eventually crashing the app with an OutOfMemoryError.
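To make the growth mechanism concrete, here is a minimal stdlib-only sketch. It does not use Cayenne's actual key format; the "ctx-N:" prefix is a made-up stand-in for the per-DataContext key subspace, and the plain HashMap stands in for a cache that expires entries by time but has no bound on entry count:

```java
import java.util.HashMap;
import java.util.Map;

public class KeySubspaceDemo {
    // Simulated cache: entries are never evicted by count, mirroring a
    // cache with an expiry policy but no upper bound on entries.
    static final Map<String, Object> CACHE = new HashMap<>();

    // Hypothetical key format: each DataContext contributes its own
    // prefix, so identical queries issued from different contexts never
    // share a cache entry.
    static void cacheQueryResult(int contextId, String query, Object result) {
        CACHE.put("ctx-" + contextId + ":" + query, result);
    }

    public static void main(String[] args) {
        // The same query issued from 10_000 short-lived contexts
        // produces 10_000 distinct entries instead of one.
        for (int ctx = 0; ctx < 10_000; ctx++) {
            cacheQueryResult(ctx, "SELECT * FROM ARTIST", new Object());
        }
        System.out.println(CACHE.size()); // prints 10000
    }
}
```

Under a steady request rate this grows linearly until expiration catches up, and with a large enough key space it exhausts the heap first.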
If a query has a cache group with no cache configured explicitly in the backend (in "ehcache.xml"), JCacheQueryCache creates a new cache on the fly using JCacheDefaultConfigurationFactory. While JCacheDefaultConfigurationFactory has a default expiration of 10 minutes, it sets no upper limit on the number of entries (there is no API in JCache to set one), so such a cache is essentially unbounded.
Since cache groups are assigned per query, and their number can grow as the app evolves, it is very easy to overlook the need for a matching <cache> configuration entry. So previously stable apps can quietly acquire such time bombs over time.
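For reference, the workaround today is to give every cache group an explicit, bounded <cache> entry. A sketch for Ehcache 3's "ehcache.xml" (the alias "artist.queries" is a hypothetical cache group name; the bounds are examples, not recommendations):

```xml
<config xmlns="http://www.ehcache.org/v3">
    <!-- One entry per Cayenne cache group; the heap bound is what the
         on-the-fly default configuration cannot provide. -->
    <cache alias="artist.queries">
        <expiry>
            <ttl unit="minutes">10</ttl>
        </expiry>
        <heap unit="entries">500</heap>
    </cache>
</config>
```

The catch, as noted above, is remembering to add such an entry every time a new cache group appears in the code.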
I wish we could create caches with fixed size bounds, but I don't see a way to do that through the JCache API. So a minimal possible solution would be to print a prominent warning in the logs whenever we have to call "JCacheQueryCache.createCache".
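The warning could look something like the sketch below. This is not Cayenne code; warnUnconfiguredCacheGroup is a hypothetical hook standing in for the spot where createCache falls back to the default configuration, and it returns the message only to keep the sketch easy to check:

```java
import java.util.logging.Logger;

public class CacheCreationWarning {
    private static final Logger LOGGER =
            Logger.getLogger(CacheCreationWarning.class.getName());

    // Hypothetical hook: called when a cache group has no explicit
    // backend configuration and a default (unbounded) cache is created.
    static String warnUnconfiguredCacheGroup(String cacheGroup) {
        String message = "Cache group '" + cacheGroup + "' has no explicit "
                + "cache configuration; creating an UNBOUNDED cache with the "
                + "default 10-minute expiration. This can cause unbounded "
                + "memory growth. Add a bounded cache entry for this group "
                + "to the backend configuration (e.g. ehcache.xml).";
        LOGGER.warning(message);
        return message;
    }

    public static void main(String[] args) {
        warnUnconfiguredCacheGroup("artist.queries");
    }
}
```

The message names the offending cache group so the fix (adding a configuration entry) is obvious from the log alone.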
In future versions we might replace the warning with an exception (?), or make this behavior (warn vs. exception) configurable via a property.