LRUQueryCache's CachingWrapperWeight works this way:
- first it looks up the cache to see if there is an entry for the query in the current leaf
- if yes, it returns it
- otherwise it checks whether the query should be cached on this leaf
- if yes, it builds a cache entry and returns it
- otherwise it returns a scorer built from the wrapped weight
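The five steps above can be sketched as a minimal standalone simulation. All names, types and signatures here are simplified stand-ins, not the actual LRUQueryCache code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal simulation of the lookup flow; not the real Lucene implementation.
public class CachingWrapperSketch {
    static final Map<String, int[]> cache = new HashMap<>();

    static int[] scorer(String query, String leaf, boolean shouldCacheLeaf) {
        String key = query + "@" + leaf;
        synchronized (cache) {                      // step 1: always takes the lock
            int[] cached = cache.get(key);
            if (cached != null) {
                return cached;                      // step 2: cache hit
            }
        }
        if (shouldCacheLeaf) {                      // step 3: should we cache on this leaf?
            int[] docs = compute(query);            // step 4: build the cache entry
            synchronized (cache) {
                cache.put(key, docs);
            }
            return docs;
        }
        return compute(query);                      // step 5: uncached scorer from the wrapped weight
    }

    static int[] compute(String query) {
        return new int[] { query.length() };        // stand-in for real matching work
    }

    public static void main(String[] args) {
        scorer("title:foo", "seg1", true);   // miss, then cached
        scorer("title:foo", "seg1", true);   // hit
        scorer("title:foo", "seg2", false);  // never cached
        System.out.println(cache.size());    // 1
    }
}
```

Note that even the `shouldCacheLeaf == false` path goes through the `synchronized` block first, which is the crux of the issue below.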
The potential issue is that the first step always takes the lock, and I have seen a couple of cases where indices were small and/or queries were very cheap and this lookup showed up as a bottleneck. On the other hand, step 3 includes checks that tell the cache not to cache on a particular segment regardless of the query. So I would like to move that segment-level check before step 1, so that in that case we do not take the lock at all.
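The proposed change amounts to hoisting the query-independent segment check above the cache lookup. A standalone sketch contrasting the two orderings, with hypothetical names and a counter to make the lock traffic visible:

```java
import java.util.HashMap;
import java.util.Map;

// Contrasts the current ordering (lock first) with the proposed one
// (segment check first). Names are illustrative, not Lucene's.
public class ReorderSketch {
    static final Map<String, int[]> cache = new HashMap<>();
    static int lockAcquisitions = 0;

    // Segment-level criterion that does not depend on the query at all.
    static boolean segmentEligible(int leafMaxDoc, int totalMaxDoc) {
        return leafMaxDoc >= 10_000 && leafMaxDoc >= 0.03 * totalMaxDoc;
    }

    static int[] compute(String query) {
        return new int[] { query.length() };  // stand-in for real scoring work
    }

    // Current order: the cache lookup (and its lock) happens unconditionally.
    static int[] current(String query, int leafMaxDoc, int totalMaxDoc) {
        String key = query + "@" + leafMaxDoc;
        synchronized (cache) {
            lockAcquisitions++;
            int[] cached = cache.get(key);
            if (cached != null) return cached;
        }
        if (segmentEligible(leafMaxDoc, totalMaxDoc)) {
            int[] docs = compute(query);
            synchronized (cache) {
                lockAcquisitions++;
                cache.put(key, docs);
            }
            return docs;
        }
        return compute(query);
    }

    // Proposed order: ineligible segments never touch the lock.
    static int[] proposed(String query, int leafMaxDoc, int totalMaxDoc) {
        if (!segmentEligible(leafMaxDoc, totalMaxDoc)) {
            return compute(query);            // fast path, no locking
        }
        String key = query + "@" + leafMaxDoc;
        synchronized (cache) {
            lockAcquisitions++;
            int[] cached = cache.get(key);
            if (cached != null) return cached;
        }
        int[] docs = compute(query);
        synchronized (cache) {
            lockAcquisitions++;
            cache.put(key, docs);
        }
        return docs;
    }

    public static void main(String[] args) {
        // A tiny 100-doc segment in a 1M-doc index: current() locks once,
        // proposed() does not lock at all.
        current("title:foo", 100, 1_000_000);
        int afterCurrent = lockAcquisitions;
        proposed("title:foo", 100, 1_000_000);
        System.out.println(afterCurrent + " " + (lockAcquisitions - afterCurrent)); // 1 0
    }
}
```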
For instance, right now we require that a segment hold at least 10k documents and at least 3% of all docs in the index in order to be cached. I just looked at a random index containing 1.7M documents: only 4 segments out of 29 met this criterion, yet those 4 segments contain 1.1M documents, 65% of the total index size. So in the case of that index, we would take the lock about 7x less often.
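For concreteness, the criterion and the resulting lock-frequency arithmetic can be written out; `cacheable` is an illustrative name, not the actual policy method:

```java
public class LockFrequency {
    // The criterion described above: at least 10k docs and at least 3% of the index.
    static boolean cacheable(int segMaxDoc, int totalMaxDoc) {
        return segMaxDoc >= 10_000 && segMaxDoc >= 0.03 * totalMaxDoc;
    }

    public static void main(String[] args) {
        int totalMaxDoc = 1_700_000;  // 3% of 1.7M is 51k docs
        // A 500k-doc segment qualifies; a 20k-doc one clears the 10k floor
        // but not the 3% ratio (20k < 51k).
        System.out.println(cacheable(500_000, totalMaxDoc)); // true
        System.out.println(cacheable(20_000, totalMaxDoc));  // false

        // 4 cacheable segments out of 29: with the check hoisted before the
        // lock, the lock is taken 29/4 ≈ 7x less often on this index.
        System.out.println(29.0 / 4);  // 7.25
    }
}
```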