The Lucene file system abstraction (package org.apache.jackrabbit.oak.plugins.index.lucene.directory) opens files from the datastore if copyOnRead is disabled. This can lead to "Too many open files" errors.
Background: Lucene opens one input (OakIndexInput) and then clones it; in the Oak directory, each clone opens a new file (OakIndexFile). Lucene only ever closes the "top" input, never the clones, so closing the top input must ensure that all clones are closed as well. For this, the Oak directory keeps a weak reference map of clones (in fact only of the first level of clones). Lucene creates many clones, and clones of clones, potentially thousands of them. The top input can stay open for a long time, and because Lucene does not close the clones, they stay open too. Occasionally the JVM garbage-collects a clone, which closes the underlying file (FileInputStream has a finalizer), but finalizers run infrequently, so many files stay open.
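To illustrate the pattern described above, here is a hypothetical sketch (not the actual OakIndexInput code) of an input that tracks its clones in a weak set, so that closing the top input also closes the clones that are still reachable. The class name TrackedInput and its methods are illustrative only; note that each instance only tracks its own direct clones, so the close has to propagate recursively.

```java
import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

// Hypothetical sketch of the clone-tracking pattern (illustrative, not Oak code).
class TrackedInput {

    // Weak set of direct clones: an entry disappears once a clone is
    // garbage-collected, so closing it again is not necessary.
    private final Set<TrackedInput> clones =
            Collections.newSetFromMap(new WeakHashMap<TrackedInput, Boolean>());

    private boolean open = true;

    // Each clone registers with the input it was cloned from, so a
    // clone of a clone is only known to its direct parent, not to the top.
    public TrackedInput clone() {
        TrackedInput c = new TrackedInput();
        clones.add(c);
        return c;
    }

    // Closing the top input recursively closes all still-reachable clones.
    void close() {
        for (TrackedInput c : clones) {
            c.close();
        }
        open = false;
    }

    boolean isOpen() {
        return open;
    }
}
```

Lucene itself never calls close() on the clones, which is why all open files hang off the lifetime of the top input in this scheme.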
A good solution would be to limit the number of open files, in the following way:
- Use a static queue, and if the queue grows too large, close the least recently used files.
- Close files after 2 seconds or so.
- Reopen files only when they are needed again.
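The queue-based idea above can be sketched roughly as follows. This is a minimal illustration, not a proposed patch: the class and method names (OpenFileLimiter, TrackedFile, access) are made up, and the real implementation would close and reopen actual streams. An access-ordered LinkedHashMap serves as the LRU queue; when the number of open files exceeds the limit, the least recently used one is closed and evicted, and a later access simply reopens it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of limiting the number of open index files with an
// LRU queue (illustrative names, not Oak API).
public class OpenFileLimiter {

    static class TrackedFile {
        final String name;

        TrackedFile(String name) {
            this.name = name;
        }

        void close() {
            // Real code would close the underlying stream here.
        }
    }

    private final int maxOpen;
    private final Map<String, TrackedFile> lru;

    OpenFileLimiter(int maxOpen) {
        this.maxOpen = maxOpen;
        // accessOrder = true: iteration order is least recently used first.
        this.lru = new LinkedHashMap<String, TrackedFile>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, TrackedFile> eldest) {
                boolean evict = size() > OpenFileLimiter.this.maxOpen;
                if (evict) {
                    eldest.getValue().close();
                }
                return evict;
            }
        };
    }

    // Returns the open file for the given name, reopening it if it was
    // closed (evicted) earlier.
    synchronized TrackedFile access(String name) {
        TrackedFile f = lru.get(name);
        if (f == null) {
            f = new TrackedFile(name); // real code: (re)open the file here
            lru.put(name, f);
        }
        return f;
    }

    synchronized int openCount() {
        return lru.size();
    }
}
```

A time-based variant (close after roughly 2 seconds of inactivity) would additionally store a last-access timestamp per entry and sweep the queue periodically.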
Better still would be some kind of in-memory caching mechanism in the datastore that limits the number of open files there, but that is probably more complicated.