The usual design is a queued ingestion pipeline, where a pool of indexer threads takes docs off a queue and feeds them to an IndexWriter, I think?
Mainly because I think apps with the kind of affinity you describe are very rare?
Hmm, I suspect it's not that rare... yes, one design is a single
indexing queue w/ a dedicated thread pool only for indexing, but a push
model is equally valid, where your app already has separate threads (or
thread pools) servicing different content sources, so when a doc
arrives on one of those source-specific threads, it's that thread that
indexes it, rather than handing off to a separate pool.
Lucene is used in a very wide variety of apps – we shouldn't optimize
the indexer based on such hard app-specific assumptions.
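A rough sketch of that push model in plain Java – the `Indexer` interface below is just a stand-in for `IndexWriter` (whose `addDocument` is thread-safe in Lucene), and the names are illustrative:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PushModelSketch {
    // Stand-in for IndexWriter; its addDocument is thread-safe in Lucene.
    interface Indexer {
        void addDocument(String doc);
    }

    /** Each content source gets its own thread, which indexes docs directly. */
    static int indexFromSources(List<List<String>> sources) {
        AtomicInteger indexed = new AtomicInteger();
        Indexer indexer = doc -> indexed.incrementAndGet(); // one shared writer
        ExecutorService pool = Executors.newFixedThreadPool(sources.size());
        for (List<String> source : sources) {
            // the source's own thread indexes its docs, no hand-off to an indexing pool
            pool.submit(() -> source.forEach(indexer::addDocument));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return indexed.get();
    }
}
```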
And if a user really has such different docs, maybe the right answer would be to have more than one index?
Hmm, but the app shouldn't have to resort to this... (it doesn't have to today).
But... could we allow an add/updateDocument call to express this
affinity explicitly? If you index homogeneous docs you wouldn't use
it, but if you index drastically different docs that fall into clear
"categories", expressing the affinity can get you a good gain in
RAM efficiency.
This may be the best solution, since then one could pass the affinity
even through a thread pool, and then we would fall back to thread
binding if the document class wasn't declared?
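A hypothetical sketch of that binder – an explicit affinity key maps to a stable DWPT slot, and undeclared docs fall back to binding by the calling thread, so the hint survives a hand-off through a thread pool. The class and method names here are invented for illustration, not Lucene API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class AffinityBinder {
    private final int slots; // number of DWPTs
    private final ConcurrentHashMap<Object, Integer> bound = new ConcurrentHashMap<>();
    private final AtomicInteger next = new AtomicInteger();

    AffinityBinder(int slots) { this.slots = slots; }

    /** Explicit affinity: the same key always maps to the same DWPT slot. */
    int slotFor(Object affinityKey) {
        return bound.computeIfAbsent(affinityKey,
                k -> next.getAndIncrement() % slots);
    }

    /** Fallback when no affinity was declared: bind by the calling thread. */
    int slotForCurrentThread() {
        return slotFor(Thread.currentThread().getId());
    }
}
```

The lookup is a single map access either way, so the fallback costs the same as today's thread binding.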
I mean this is virtually identical to "having more than one index",
since the DW is like its own index. It just saves some of the
copy-back/merge cost of addIndexes...
Even if today an app utilizes the thread affinity, this only results in maybe somewhat faster indexing performance, but the benefits would be lost after flushing/merging.
Yes this optimization is only about the initial flush, but, it's
potentially sizable. Merging matters less since typically it's not
the bottleneck (happens in the BG, quickly enough).
On the right apps, thread affinity can make a huge difference. EG if
you allow up to 8 thread states, and the threads are indexing content
w/ highly divergent terms (eg, one language per thread, or, docs w/
very different field names), in the worst case you'll be only 1/8 as
efficient, since each term must now be copied in up to 8 places
instead of one. We have a high per-term RAM cost (reduced thanks to
the parallel arrays, but, still high).
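A tiny simulation of that worst case, with invented numbers: 8 sources with fully disjoint term sets, each source emitting 16 docs over its 100 terms. With affinity each (term, DWPT) dictionary entry exists once; with round-robin binding (a stand-in for "any thread may get any doc") every source's terms get copied into every DWPT:

```java
import java.util.HashSet;
import java.util.Set;

public class TermDuplication {
    /** Count distinct (term, DWPT) term-dictionary entries. */
    static int termSlots(int dwpts, int sources, int termsPerSource,
                         int docsPerSource, boolean affinity) {
        Set<String> slots = new HashSet<>();
        int doc = 0;
        for (int s = 0; s < sources; s++) {
            for (int d = 0; d < docsPerSource; d++) {
                // affinity pins a source to one DWPT; otherwise docs round-robin
                int dwpt = affinity ? s % dwpts : doc % dwpts;
                for (int t = 0; t < termsPerSource; t++) {
                    // each doc touches its source's full (disjoint) term set
                    slots.add(dwpt + ":" + s + "-term" + t);
                }
                doc++;
            }
        }
        return slots.size();
    }
}
```

With 8 DWPTs the entry count grows 8x without affinity (6400 vs 800 here), which is where the "1/8 as efficient" worst case comes from.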
If we assign docs randomly to available DocumentsWriterPerThreads, then we should on average make good use of the overall memory?
It really depends on the app – if the term space is highly thread
dependent (above examples) you can end up flushing much more frequently
for a given RAM buffer.
Alternatively we could also select the DWPT from the pool of available DWPTs that has the highest amount of free memory?
Hmm... this would be a kinda costly binder? You'd need a pqueue?
Thread affinity (or the explicit affinity) is a single
map/array/member lookup. But it's an interesting idea...
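A sketch contrasting the two binder costs, with invented names: affinity binding is one map lookup, while picking the DWPT with the most free RAM needs an ordered structure (a pqueue here) that must be re-ordered every time a DWPT's RAM accounting changes:

```java
import java.util.Map;
import java.util.PriorityQueue;

public class BinderCost {
    record Dwpt(int id, long bytesUsed) {}

    /** O(1): affinity key -> DWPT id, a single map lookup. */
    static int byAffinity(Map<String, Integer> binding, String key) {
        return binding.get(key);
    }

    /** O(log n) per pick: take the DWPT with the least RAM used... */
    static int byFreeMemory(PriorityQueue<Dwpt> pool) {
        Dwpt d = pool.poll();
        // ...and re-queue it with updated accounting after every size change
        pool.add(new Dwpt(d.id(), d.bytesUsed() + 1));
        return d.id();
    }
}
```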
If you do have a global RAM management, how would the flushing work? E.g. when a global flush is triggered because all RAM is consumed, and we pick the DWPT with the highest amount of allocated memory for flushing, what will the other DWPTs do during that flush? Wouldn't we have to pause the other DWPTs to make sure we don't exceed the maxRAMBufferSize?
The other DWs would keep indexing. That's the beauty of this
approach... a flush of one DW doesn't stop all other DWs from
indexing, unlike today.
And you want to serialize the flushing right? Ie, only one DW flushes
at a time (the others keep indexing).
Hmm I suppose flushing more than one should be allowed (OS/IO have
a lot of concurrency, esp since IO goes into write cache)... perhaps
that's the best way to balance index vs flush time? EG we pick one to
flush @ 90%, if we cross 95% we pick another to flush, another at
100%, etc.?
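The tiered idea could look something like this sketch – the thresholds and names are illustrative, not Lucene's API:

```java
public class TieredFlush {
    // Each tier crossed allows one more DWPT to flush concurrently.
    static final double[] TIERS = {0.90, 0.95, 1.00};

    /** How many DWPTs should be flushing concurrently at this RAM usage? */
    static int concurrentFlushes(long usedBytes, long budgetBytes) {
        double used = (double) usedBytes / budgetBytes;
        int flushes = 0;
        for (double tier : TIERS) {
            if (used >= tier) flushes++;
        }
        return flushes;
    }
}
```

So indexing never hard-stops: crossing each tier just adds another concurrent flush while the remaining DWPTs keep going.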
Of course we could say "always flush when 90% of the overall memory is consumed", but how would we know that the remaining 10% won't fill up during the time the flush takes?
Regardless of the approach for document -> DW binding, this is an
issue (ie it's non-differentiating here)? Ie the other DWs continue
to consume RAM while one DW is flushing. I think the low/high water
mark is an OK solution here? Or the tiered flushing (I think I like
that better).
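For comparison, the low/high water-mark alternative is basically hysteresis – start flushing when usage crosses the high mark, keep flushing until it drains below the low mark, leaving headroom for the DWPTs that keep indexing meanwhile. Marks and names are illustrative:

```java
public class WaterMarks {
    static final double LOW = 0.75, HIGH = 0.90;

    private boolean flushing;

    /** Called after every RAM accounting change; returns whether to flush. */
    boolean shouldFlush(long usedBytes, long budgetBytes) {
        double used = (double) usedBytes / budgetBytes;
        if (used >= HIGH) {
            flushing = true;   // crossed the high mark: start flushing
        } else if (used < LOW) {
            flushing = false;  // drained below the low mark: stop
        }
        // between the marks, keep whatever state we were in (hysteresis)
        return flushing;
    }
}
```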
Having a fully decoupled memory management is compelling I think, mainly because it makes everything so much simpler. A DWPT could decide itself when it's time to flush, and the other ones can keep going independently.
I'm all for simplifying things, which you've already nicely done here,
but not if it's at the cost of a non-trivial potential indexing perf
loss. We're already taking a perf hit here, since the doc stores
can't be shared... I think that case is justifiable (a good
tradeoff).