Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Labels: None
Description
The FileCache in the caching data store (Azure, S3) uses the default segment count of 16. The effect of that is:
- if the maximum cache size is e.g. 16 GB,
- and there are e.g. 15 files of 1 GB each (15 GB in total),
- some files can still be evicted,
- because internally the cache uses 16 segments of 1 GB each,
- and by chance 2 files can land in the same segment,
- so that one of those files is evicted (see the sketch after this list).
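To make the failure mode concrete, here is a small, self-contained Java simulation of the behaviour described in the list above. The class name, the hash-to-segment mapping and the FIFO eviction are hypothetical stand-ins, not the actual FileCache/CacheLIRS code: the point is only that each segment gets maxSize / 16 of the budget, so as soon as two 1 GB files hash to the same segment, one of them is evicted even though the cache as a whole holds far less than 16 GB.
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical simulation, not the actual FileCache/CacheLIRS code: a cache
// with a 16 GB limit split into 16 segments evicts from a single segment as
// soon as that segment's 1 GB share is exceeded, even though the cache as a
// whole is far below its configured maximum.
public class SegmentedEvictionDemo {

    static final int SEGMENT_COUNT = 16;
    static final long MAX_CACHE_BYTES = 16L << 30;                     // 16 GB in total
    static final long SEGMENT_BYTES = MAX_CACHE_BYTES / SEGMENT_COUNT; // 1 GB per segment

    // one insertion-ordered map per segment: file name -> file size in bytes
    @SuppressWarnings("unchecked")
    static final LinkedHashMap<String, Long>[] segments = new LinkedHashMap[SEGMENT_COUNT];
    static final long[] used = new long[SEGMENT_COUNT];

    public static void main(String[] args) {
        for (int i = 0; i < SEGMENT_COUNT; i++) {
            segments[i] = new LinkedHashMap<>();
        }
        // 15 files of 1 GB each: 15 GB in total, below the 16 GB limit,
        // yet evictions happen whenever two names hash to the same segment
        for (int i = 0; i < 15; i++) {
            put("file-" + i, 1L << 30);
        }
    }

    static void put(String name, long size) {
        int s = Math.floorMod(name.hashCode(), SEGMENT_COUNT);
        segments[s].put(name, size);
        used[s] += size;
        // evict the oldest entries of this segment only,
        // ignoring the free space in all other segments
        while (used[s] > SEGMENT_BYTES) {
            Map.Entry<String, Long> oldest = segments[s].entrySet().iterator().next();
            segments[s].remove(oldest.getKey());
            used[s] -= oldest.getValue();
            System.out.println("adding " + name + " evicted " + oldest.getKey()
                    + " from segment " + s);
        }
    }
}
{code}
With the file names above, several of the 15 files happen to share a segment, so the simulation prints a few evictions even though only 15 GB of the 16 GB budget is in use.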
The workaround is to configure a much larger cache size (e.g. 100 GB if you only want 15 GB of cache), but the drawback is that, if most files are very small, the cache could actually grow to 100 GB.
The best solution is probably to use only 1 segment. There is a tiny concurrency issue: right now, deleting files is synchronized on the segment. But I think that's not a big problem (to be tested).
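As a rough illustration of the single-segment idea, the sketch below uses Guava's CacheBuilder as an analogue (the actual FileCache is built on Oak's CacheLIRS, so the class, weigher and path below are only illustrative): Guava likewise splits its maximumWeight budget across internal segments whose count is derived from concurrencyLevel, so a concurrency level of 1 keeps the whole 16 GB budget in one segment, at the price of all writes synchronizing on that single segment.
{code:java}
import java.io.File;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

// Analogy only, not the actual FileCache code: Guava also divides its
// maximumWeight budget across internal segments (their count derived from
// concurrencyLevel), so concurrencyLevel(1) gives a single segment that can
// hold any mix of files up to the full 16 GB. The trade-off is that writes,
// and in the FileCache case file deletions, synchronize on that one segment.
public class SingleSegmentCacheSketch {

    public static void main(String[] args) {
        long maxBytes = 16L << 30; // 16 GB, no longer split into 16 x 1 GB shares

        Cache<String, File> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(1)     // one segment -> one shared weight budget
                .maximumWeight(maxBytes)
                .weigher((Weigher<String, File>) (path, file) ->
                        // weigh entries by file length; Guava weights are ints
                        (int) Math.min(file.length(), Integer.MAX_VALUE))
                .build();

        cache.put("file-0", new File("/tmp/file-0")); // hypothetical path
        System.out.println("entries cached: " + cache.size());
    }
}
{code}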
Issue Links
- relates to: OAK-6303 Cache in CachingBlobStore might grow beyond configured limit (Open)