You're still okay with an API that allows you to reopen IRs on different directories?
Well, that's no good - we can catch this and throw an exception?
I don't understand why we should bother with checking and throwing exceptions when we can prevent such things from compiling at all, by using an API that doesn't support reopening on anything other than the original source.
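A minimal sketch of what such an API could look like (hypothetical names, not Lucene's actual interface): reopen() takes no arguments, so "reopen on a different directory" is simply not expressible and the mistake fails to compile instead of throwing at runtime.

```java
// Hypothetical sketch, not Lucene's real API: the reader remembers its
// own source, and reopen() cannot be pointed anywhere else.
interface ReopenableReader {
    ReopenableReader reopen();  // always reopens on the original source
    String source();            // where this reader reads from
}

class DirReader implements ReopenableReader {
    private final String dir;
    DirReader(String dir) { this.dir = dir; }
    public ReopenableReader reopen() { return new DirReader(dir); }
    public String source() { return dir; }
}
```

Any "reopen elsewhere" misuse is then a type error at the call site rather than a runtime check inside reopen().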
Really, there are two separate "things" open/reopen needs:
That's not true. Take a look at my WriterBackedReader above (or DirectoryReader in trunk). It needs the writer at least to call deleteUnusedFiles() and nrtIsCurrent().
So you can't easily reopen between Directory-backed and Writer-backed readers without a lot of switching and checking.
r_ram.reload(); // here we want to reload from the FSDirectory?
Use MMapDirectory? It's only a bit slower for searches, and it doesn't hammer your GC on big indexes.
Also check this out - https://gist.github.com/715617 - it's a RAMDirectory offspring that wraps any other given directory and does basically what you want (if I guessed right).
It doesn't split files into blocks, so the file size limit is 2 GB, but that can be easily fixed. On the upside, it reads a file into memory only after its size is known (unlike RAMDirectory), which lets it use huge precisely-sized buffers, lessening GC pressure.
I used it for a long time, but then my indexes grew, heaps followed suit, the VM exploded, and I switched to MMapDirectory (with minor patches).
What is missing is a "signal" from IR.reload() to RAMdirectory to slurp fresh information from FSDirecory?
There is zero need for any such signal. If a reader requests a non-existent file from the RAMDirectory, it should check the backing dir before throwing an exception. If the backing dir does have the file, it is loaded and opened.
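A minimal sketch of that lazy-fallback idea, using plain java.nio instead of Lucene's Directory API (class and method names here are made up for illustration): on a cache miss, the file is pulled from the backing directory into one precisely-sized byte array, so no signal from the reader is ever needed.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical simplified stand-in for a RAM-caching Directory:
// files are just named byte[] blobs kept in a map.
class CachingDir {
    private final Path backing;                        // backing FS directory
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    CachingDir(Path backing) { this.backing = backing; }

    byte[] openInput(String name) throws IOException {
        byte[] data = cache.get(name);
        if (data == null) {
            // Cache miss: check the backing dir before giving up.
            Path f = backing.resolve(name);
            if (!Files.exists(f)) {
                throw new IOException(name + " not found");
            }
            // The size is known up front, so we allocate one
            // precisely-sized array per file - no growable buffers.
            data = Files.readAllBytes(f);
            cache.put(name, data);
        }
        return data;
    }
}
```

A reopened reader that asks for a freshly committed segment file then loads it transparently on first access, with no explicit reload hook.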
Why do you people love complicating things that much?