Jackrabbit Oak / OAK-8950

DataStore: FileCache should use one cache segment


    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.26.0
    • Component/s: blob
    • Labels: None

      Description

      The FileCache in the caching data store (Azure, S3) uses the default segment count of 16. The effect of that (see the sketch after this list) is:

      • if the maximum cache size is e.g. 16 GB
      • and there are e.g. 15 files of 1 GB each (total 15 GB),
      • some files can still be evicted,
      • because internally the cache is split into 16 segments of 1 GB each,
      • and by chance 2 files can land in the same segment,
      • so that one of those files is evicted.
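
      The effect can be reproduced with any segmented, weight-bounded cache. Below is a minimal sketch using Guava's CacheBuilder purely as a stand-in (FileCache is built on Oak's own cache implementation, but the per-segment split of the maximum weight is the same idea); the class and key names are illustrative, concurrencyLevel models the segment count, and weights are in MB:

      {code:java}
      import com.google.common.cache.Cache;
      import com.google.common.cache.CacheBuilder;

      // Stand-in sketch: weights are in MB, so one 1 GB file has weight 1024.
      public class SegmentEvictionDemo {

          private static Cache<String, Long> buildCache(int segments) {
              return CacheBuilder.newBuilder()
                      .concurrencyLevel(segments)   // models the segment count
                      .maximumWeight(16L * 1024)    // 16 GB total budget, in MB
                      .<String, Long>weigher((name, sizeMb) -> sizeMb.intValue())
                      .build();
          }

          private static long fill(Cache<String, Long> cache) {
              for (int i = 0; i < 15; i++) {
                  cache.put("file-" + i, 1024L);    // fifteen 1 GB files
              }
              cache.cleanUp();
              return cache.size();
          }

          public static void main(String[] args) {
              // 16 segments: each segment gets only 1 GB of the 16 GB budget,
              // so two files hashing into the same segment force an eviction
              // even though the cache as a whole is below its limit.
              System.out.println("16 segments: " + fill(buildCache(16)));
              // 1 segment: the full 16 GB budget applies to all entries.
              System.out.println(" 1 segment:  " + fill(buildCache(1)));
          }
      }
      {code}

      With 15 files hashed over 16 segments, at least one collision is almost certain, so the first line usually prints well below 15, while the second always prints 15.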

      The workaround is to use a really large cache size (e.g. 100 GB if you only want 15 GB of cache), but the drawback is that, if most files are very small, the cache could actually grow to 100 GB.
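
      To see why oversizing helps, and why it is only a mitigation, here is the per-segment arithmetic (assuming the default 16 segments and an even split of the budget; the class name is illustrative):

      {code:java}
      public class WorkaroundMath {
          public static void main(String[] args) {
              long gb = 1024L * 1024 * 1024;
              long cacheSize = 100 * gb;          // oversized cache (workaround)
              long perSegment = cacheSize / 16;   // default 16 segments
              System.out.printf("per-segment budget: %.2f GB%n", perSegment / (double) gb);
              // Prints 6.25 GB: evicting one of the 15 one-GB files now needs
              // 7 or more of them to hash into the same segment - unlikely,
              // but not impossible (only 16 * 15 GB = 240 GB would rule it
              // out entirely). And if the workload is mostly small files,
              // nothing stops the cache from actually filling all 100 GB.
          }
      }
      {code}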

      The best solution is probably to use only 1 segment. There is a tiny concurrency issue: right now, deleting files is synchronized on the segment. But I think that's not a big problem (to be tested).
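
      In the same Guava stand-in terms (again, FileCache itself uses Oak's own cache; names here are illustrative), the proposed fix is just building the cache with a single segment. The removal listener marks the spot where FileCache deletes the cached file from disk, i.e. the deletion that is synchronized on the segment:

      {code:java}
      import java.io.File;

      import com.google.common.cache.Cache;
      import com.google.common.cache.CacheBuilder;
      import com.google.common.cache.RemovalListener;

      // Stand-in sketch of the proposed fix: one segment, whole budget shared.
      public class SingleSegmentSketch {

          static Cache<String, File> build() {
              // On eviction the cached file is removed from disk. Per the
              // description above, FileCache synchronizes this delete on the
              // segment, so one segment means one lock for all deletes -
              // expected to be acceptable, but to be tested.
              RemovalListener<String, File> deleteOnEvict =
                      notification -> notification.getValue().delete();

              return CacheBuilder.newBuilder()
                      .concurrencyLevel(1)          // a single internal segment
                      .maximumWeight(16L * 1024)    // 16 GB budget, in MB
                      .<String, File>weigher((name, file) ->
                              Math.max(1, (int) (file.length() / (1024 * 1024))))
                      .removalListener(deleteOnEvict)
                      .build();
          }

          public static void main(String[] args) {
              System.out.println("entries: " + build().size());   // 0: cache starts empty
          }
      }
      {code}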

    People

    • Assignee: Thomas Mueller (thomasm)
    • Reporter: Thomas Mueller (thomasm)
    • Votes: 0
    • Watchers: 2

    Dates

    • Created:
    • Updated:
    • Resolved: