We only need to throw an exception for non-Accountable items at the point they are put into the cache, right?
Old items being removed (or overwritten) must already be Accountable, since they were added at some point.
Also, it's going to be relatively easy to blow this out of the water on purpose, or even by accident.
1) Do facet.method=enum on a high-cardinality field like "id" and thus put a million small items in the cache.
2) Start searching normally: the cache size will stay at a million entries regardless of the size of the items we put in, since removeEldestEntry is only called once for each put and can therefore evict at most one entry per insertion.
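To make the second point concrete, here is a minimal (hypothetical) sketch of a RAM-bounded LRU built on LinkedHashMap.removeEldestEntry; the class name, byte accounting, and sizes are all illustrative assumptions, not the actual patch. A single large put after many tiny puts can evict only one tiny entry, leaving the cache well over its RAM limit.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: a RAM-bounded LRU based on removeEldestEntry.
// Because removeEldestEntry is consulted at most once per put, it can
// evict at most one entry, so one big insert can blow past the limit.
class RamLruSketch {
    static class RamLruCache extends LinkedHashMap<String, long[]> {
        final long maxRamBytes;
        long ramBytes; // running estimate of cached value bytes

        RamLruCache(long maxRamBytes) {
            super(16, 0.75f, true); // access-order, LRU-style
            this.maxRamBytes = maxRamBytes;
        }

        @Override
        public long[] put(String key, long[] value) {
            ramBytes += value.length * 8L;
            long[] old = super.put(key, value);
            if (old != null) ramBytes -= old.length * 8L;
            return old;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<String, long[]> eldest) {
            if (ramBytes > maxRamBytes) {
                ramBytes -= eldest.getValue().length * 8L;
                return true; // evicts only this single entry per put
            }
            return false;
        }
    }

    public static void main(String[] args) {
        RamLruCache cache = new RamLruCache(8_000); // ~8 KB budget
        // Fill with 1000 tiny entries (8 bytes each) -- exactly at the limit.
        for (int i = 0; i < 1000; i++) cache.put("id" + i, new long[1]);
        // One big (~4 KB) entry: only one tiny entry gets evicted on this put.
        cache.put("big", new long[500]);
        // ramBytes now far exceeds maxRamBytes despite the "limit".
        System.out.println("entries=" + cache.size() + " ramBytes=" + cache.ramBytes);
    }
}
```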
After a quick browse of LinkedHashMap, I didn't see an obvious easy/fast way to remove more than the single eldest entry per put, so I'm not sure how to fix this.
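One possibility (an untested sketch, not a proposal from the patch): LinkedHashMap's iterator returns entries eldest-first (insertion order, or least-recently-accessed with accessOrder=true), and Iterator.remove is O(1), so after each put we could loop and evict until the size estimate drops under the limit. The helper name and the long[]-length byte accounting below are stand-ins.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Untested sketch: evict in a loop after each put instead of relying on
// removeEldestEntry, walking LinkedHashMap's eldest-first iteration order.
class EvictLoopSketch {
    // Returns the updated RAM estimate after evicting eldest entries
    // until it is at or below maxRamBytes (or the map is empty).
    static long evictToLimit(LinkedHashMap<String, long[]> map,
                             long ramBytes, long maxRamBytes) {
        Iterator<Map.Entry<String, long[]>> it = map.entrySet().iterator();
        while (ramBytes > maxRamBytes && it.hasNext()) {
            Map.Entry<String, long[]> eldest = it.next();
            ramBytes -= eldest.getValue().length * 8L; // crude size stand-in
            it.remove(); // O(1) removal of the eldest entry
        }
        return ramBytes;
    }

    public static void main(String[] args) {
        LinkedHashMap<String, long[]> map = new LinkedHashMap<>(16, 0.75f, true);
        long ram = 0;
        for (int i = 0; i < 1000; i++) { map.put("id" + i, new long[1]); ram += 8; }
        map.put("big", new long[500]); ram += 4000; // ~12 KB total
        // Multiple tiny entries get evicted to make room for the big one.
        ram = evictToLimit(map, ram, 8_000);
        System.out.println("entries=" + map.size() + " ramBytes=" + ram);
    }
}
```

Note this would still need the usual synchronization the cache already does around mutation, since the eviction loop and the put must be atomic with respect to readers.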
For the calculation of the amount of RAM taken up... perhaps we should estimate the minimum that a key + internal map node would take up?
For the query cache in particular, it's going to be common for query keys to take up more memory than the actual DocSlice.