Jackrabbit Oak / OAK-7460

OakDirectory: limit the number of open files


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: lucene, query

    Description

      The Lucene file system abstraction (package org.apache.jackrabbit.oak.plugins.index.lucene.directory) opens files from the datastore if copyOnRead is disabled. This can lead to "Too many open files" errors.

      Background: Lucene opens one input (OakIndexInput) and then clones it. In the Oak directory, each clone opens a new file (OakIndexFile). Lucene only ever closes the "top" input and never closes the clones, so closing the top input needs to ensure that all clones are closed as well. The Oak directory keeps a weak reference map of clones (actually only the first level of clones). Lucene creates lots of clones, and clones of clones; there may well be thousands of them. The top input can stay open for a long time, and because the clones are not closed by Lucene, they stay open just as long. Sometimes a clone is garbage collected by the JVM, which closes the file (FileInputStream has a finalizer), but finalizers are not run very often, so many files stay open.
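
      To illustrate the pattern described above, here is a minimal, hypothetical sketch (TrackedInput is not an existing Oak class, and this is not the actual OakIndexInput code): the "top" input tracks its clones in a weak map and closes whatever clones are still reachable when it is closed itself. Registering clones of clones with the top-level input is one way to keep them reachable from the top:

        import java.io.Closeable;
        import java.io.IOException;
        import java.util.Collections;
        import java.util.Map;
        import java.util.WeakHashMap;

        // Hypothetical sketch only; not the actual OakIndexInput implementation.
        class TrackedInput implements Closeable {

            // weak map, so that unreferenced clones can still be garbage collected
            private final Map<TrackedInput, Boolean> clones =
                    Collections.synchronizedMap(new WeakHashMap<TrackedInput, Boolean>());

            private final TrackedInput top; // null for the top-level input
            private volatile boolean closed;

            TrackedInput() {
                this.top = null;
            }

            private TrackedInput(TrackedInput top) {
                this.top = top;
            }

            @Override
            public TrackedInput clone() {
                // register every clone (including clones of clones) with the
                // top-level input, so that closing the top can reach all of them
                TrackedInput root = (top == null) ? this : top;
                TrackedInput clone = new TrackedInput(root);
                root.clones.put(clone, Boolean.TRUE);
                return clone;
            }

            @Override
            public void close() throws IOException {
                closed = true;
                if (top == null) {
                    // Lucene never closes the clones, so the top input has to do it
                    synchronized (clones) {
                        for (TrackedInput c : clones.keySet()) {
                            c.closed = true; // here the underlying OakIndexFile would be closed
                        }
                        clones.clear();
                    }
                }
            }
        }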

      A good solution would be to limit the number of open files, in the following way (see the sketch after this list):

      • Use a static queue and, if the queue grows too large, close the files that have not been used for the longest time.
      • Close files after about 2 seconds of inactivity.
      • Reopen files only when they are actually needed again.
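
      A minimal sketch of this queue-based limit, assuming a hypothetical ReopenableFile abstraction over OakIndexFile (the interface, the limit of 1000 open files and the 2 second idle timeout are illustrative, not existing Oak code): every read first calls markUsed, which reopens the file if necessary and then closes the least recently used or idle files.

        import java.io.IOException;
        import java.util.Iterator;
        import java.util.LinkedHashMap;
        import java.util.Map;

        // Hypothetical abstraction over a file that can be closed and reopened later.
        interface ReopenableFile {
            boolean isOpen();
            void open() throws IOException;
            void closeHandle() throws IOException;
        }

        // Hypothetical sketch of the proposed limit: a shared (static) LRU structure
        // of open files. Entries over the limit, or idle for too long, are closed;
        // a closed file is reopened transparently on the next access.
        class OpenFileTracker {

            private static final int MAX_OPEN = 1000;
            private static final long IDLE_MILLIS = 2000;

            // access-ordered, so iteration starts at the least recently used entry
            private static final LinkedHashMap<ReopenableFile, Long> OPEN =
                    new LinkedHashMap<ReopenableFile, Long>(16, 0.75f, true);

            static synchronized void markUsed(ReopenableFile file) throws IOException {
                long now = System.currentTimeMillis();
                if (!file.isOpen()) {
                    file.open();      // reopen only when the file is actually needed
                }
                OPEN.put(file, now);  // moves the entry to the most recently used position

                // close least recently used entries that are over the limit or idle
                Iterator<Map.Entry<ReopenableFile, Long>> it = OPEN.entrySet().iterator();
                while (it.hasNext()) {
                    Map.Entry<ReopenableFile, Long> e = it.next();
                    boolean overLimit = OPEN.size() > MAX_OPEN;
                    boolean idle = now - e.getValue() > IDLE_MILLIS;
                    if (!overLimit && !idle) {
                        break;        // all remaining entries are more recently used
                    }
                    e.getKey().closeHandle();
                    it.remove();
                }
            }
        }

      Note that with this approach idle files are only closed when some file is accessed; a small background task running the same eviction logic periodically could also close files while the index is completely idle.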

      Better yet would be to use some kind of in-memory caching mechanism in the datastore, and limit the number of open files there. But that's probably more complicated.
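
      Such a datastore-side cache could, for example, be a size- and time-bounded cache whose eviction closes the file and whose loader reopens it on demand. A hypothetical sketch using the Guava cache follows (BlobId, FileHandle and openHandle are placeholders, not existing Oak APIs); note that Guava only performs eviction during cache operations, so a periodic cleanUp() call may be needed to close idle handles:

        import java.io.Closeable;
        import java.io.IOException;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.TimeUnit;

        import com.google.common.cache.CacheBuilder;
        import com.google.common.cache.CacheLoader;
        import com.google.common.cache.LoadingCache;
        import com.google.common.cache.RemovalListener;

        // Hypothetical sketch; BlobId, FileHandle and openHandle are placeholders.
        class HandleCache {

            static final class BlobId {
                // placeholder for whatever identifies a binary in the datastore
            }

            interface FileHandle extends Closeable {
                // placeholder for an open, seekable handle to a binary
            }

            private final LoadingCache<BlobId, FileHandle> cache = CacheBuilder.newBuilder()
                    .maximumSize(1000)                       // hard limit on open handles
                    .expireAfterAccess(2, TimeUnit.SECONDS)  // drop handles that are idle
                    .removalListener((RemovalListener<BlobId, FileHandle>) notification -> {
                        try {
                            notification.getValue().close(); // eviction closes the file
                        } catch (IOException e) {
                            // the handle is being discarded anyway
                        }
                    })
                    .build(new CacheLoader<BlobId, FileHandle>() {
                        @Override
                        public FileHandle load(BlobId id) {
                            return openHandle(id);           // reopen on demand
                        }
                    });

            FileHandle getHandle(BlobId id) throws ExecutionException {
                return cache.get(id);
            }

            private FileHandle openHandle(BlobId id) {
                // placeholder: would open the binary from the datastore
                throw new UnsupportedOperationException();
            }
        }

      One complication this sketch glosses over: a handle evicted while a reader is still positioned on it would be closed underneath that reader, so reopening and repositioning would have to be handled by the caller, which is part of why this approach is more involved.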

          People

            Assignee: Unassigned
            Reporter: Thomas Mueller (thomasm)
            Votes: 0
            Watchers: 2
