Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Fix Version: 3.4
Description
I am witnessing an apparent leak in the memory tracking used to determine when a flush is necessary.
Over time, this results in every single document being flushed into its own segment: memUsage remains above the configured buffer size, so a flush is triggered after every add/update.
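To make the failure mode concrete, here is a minimal sketch of the flush trigger (hypothetical code, not Lucene source; the names only mirror the accounting DocumentsWriter performs): once the counter stops coming back down, a flush fires after every add/update.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the flush trigger described above. This is NOT Lucene
// source; memUsage and ramBufferSizeMB simply stand in for the accounting
// that DocumentsWriter performs.
class FlushTriggerSketch {
    // Bytes the writer believes are currently held by in-memory state.
    private final AtomicLong memUsage = new AtomicLong();
    private final double ramBufferSizeMB = 16.0;

    void bytesAllocated(long bytes) { memUsage.addAndGet(bytes); }
    void bytesFreed(long bytes)     { memUsage.addAndGet(-bytes); }

    // Checked after each add/update. If some frees are never credited back,
    // memUsage stays pinned above the buffer size and this returns true for
    // every document, producing one segment per document.
    boolean flushNeeded() {
        return memUsage.get() >= (long) (ramBufferSizeMB * 1024 * 1024);
    }
}
```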
Best I can figure, this is being caused by TermsHashPerField's tracking of memory usage for postingsHash and/or postingsArray combined with multi-threaded feeding.
I suspect that a TermsHashPerField's postingsHash grows in one thread, and then, when a segment is flushed, a single, different thread merges all TermsHashPerFields in FreqProxTermsWriter and calls shrinkHash(). That call to shrinkHash() appears to be seeing a stale postingsHash array, and consequently does not release all the memory that was allocated.
If this is the case, I am also concerned that FreqProxTermsWriter will not write the correct terms into the index, although I have not yet confirmed that any indexing problem occurs.
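Purely for illustration, here is a hand-rolled sketch of the kind of race I suspect (hypothetical class and fields, not the real TermsHashPerField): one thread grows the hash and charges the new size, while the shrinking thread sees a stale array and credits back less than was charged.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hand-rolled illustration of the suspected race; the class and fields are
// hypothetical stand-ins, not the real TermsHashPerField. An indexing thread
// grows the hash and charges the larger size to the shared counter, while the
// flushing thread shrinks based on a possibly stale view of the array and
// credits back fewer bytes than were charged.
class ShrinkHashRaceSketch {
    static final AtomicLong memUsage = new AtomicLong();
    static final int INITIAL_SIZE = 4;

    // Not volatile and not synchronized: another thread may observe an older,
    // smaller array here.
    int[] postingsHash = new int[INITIAL_SIZE];

    // Called by the indexing thread as terms are added.
    void growHash() {
        int[] newHash = new int[postingsHash.length * 2];
        memUsage.addAndGet((long) (newHash.length - postingsHash.length) * Integer.BYTES);
        postingsHash = newHash;
    }

    // Called by a different thread at flush time. If it sees the pre-growth
    // array, the credit below is smaller than the earlier charge, and
    // memUsage creeps upward a little on each such flush.
    void shrinkHash() {
        memUsage.addAndGet(-(long) (postingsHash.length - INITIAL_SIZE) * Integer.BYTES);
        postingsHash = new int[INITIAL_SIZE];
    }
}
```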
NOTE: I am witnessing this growth in a test by subtracting the amount of memory allocated (but in a "free" state) by perDocAllocator/byteBlockAllocator/charBlocks/intBlocks from DocumentsWriter.memUsage.get() in IndexWriter.doAfterFlush().
I will see this stay at a stable point for a while; then, on some flushes, I will see it grow by a couple of bytes, and on all subsequent flushes it never goes back down to the previous level.
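For reference, the quantity I am tracking in that test is computed roughly as follows (the parameter names are placeholders for the free-pool sizes, not real Lucene accessors):

```java
// Rough sketch of the computation in that test. The parameter names are
// placeholders for the free-pool sizes of perDocAllocator, byteBlockAllocator,
// charBlocks and intBlocks; they are not real Lucene accessors.
class LeakCheckSketch {
    // memUsage minus the bytes sitting idle in the free pools should return
    // to roughly the same baseline after every flush; a value that only
    // ratchets upward across flushes is the leak reported here.
    static long liveTrackedBytes(long memUsage,
                                 long freePerDocBytes,
                                 long freeByteBlockBytes,
                                 long freeCharBlockBytes,
                                 long freeIntBlockBytes) {
        return memUsage
             - freePerDocBytes
             - freeByteBlockBytes
             - freeCharBlockBytes
             - freeIntBlockBytes;
    }
}
```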
I will continue to investigate and post any additional findings.