Our code is as follows:
We have 31 index directories in /tmp (about 5 million records across our indexes). We need real-time search, so we call writer.getReader() to obtain a near-real-time reader.
We then run the search repeatedly, and after about 10 minutes the JVM fails with a java 'out of memory' error.
keywordQuery = QueryParser(Version.LUCENE_CURRENT, "content",
                           StandardAnalyzer(Version.LUCENE_CURRENT)).parse("when AND you")

writers = []
for i in range(1, 32):
    # each index lives under /tmp/1 .. /tmp/31
    dir = os.path.join("/tmp", str(i))
    luceneDir = SimpleFSDirectory(File(dir))
    writer = IndexWriter(luceneDir, StandardAnalyzer(Version.LUCENE_CURRENT),
                         False, IndexWriter.MaxFieldLength.LIMITED)
    writers.append(writer)

searchersList = []
readers = []
for writer in writers:
    reader = writer.getReader()      # near-real-time reader
    readers.append(reader)
    searcher = IndexSearcher(reader)
    searchersList.append(searcher)

multiSearcherInstance = MultiSearcher(searchersList)
docs = multiSearcherInstance.search(keywordQuery, IndexerCons.TOP_DOC_NUMBER).scoreDocs

# release per-iteration resources after the search
for searcher in searchersList:
    searcher.close()
for reader in readers:
    reader.close()
When we instead use normal readers (opened directly from the directories) rather than the real-time readers, the same test runs fine with no 'out of memory' error.
The bug may be in Java Lucene itself, but we are not sure.
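Independent of Lucene, the resource discipline we follow in the working variant can be sketched in plain Python. The FakeReader class and helper names below are hypothetical stand-ins, not PyLucene API; the point is only that each reader acquired per search is released even if the query raises:

```python
# Hypothetical stand-in for a Lucene reader: records whether close() was called.
class FakeReader:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def search_with_cleanup(get_reader, run_query):
    """Acquire a fresh reader for one search and always release it,
    even if the query raises an exception."""
    reader = get_reader()
    try:
        return run_query(reader)
    finally:
        reader.close()


# Usage: every reader obtained is closed after its search.
opened = []

def get_reader():
    r = FakeReader()
    opened.append(r)
    return r

for _ in range(3):
    search_with_cleanup(get_reader, lambda r: "docs")

print(all(r.closed for r in opened))  # -> True
```

If the near-real-time readers returned by writer.getReader() were not released this way, each one would keep holding its buffers and file handles, which is one plausible explanation for the growing heap.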