Keeping track of not-yet-sync'd files instead of sync'd files is
better, but it still requires upkeep (i.e., when a file is deleted you
have to remove it from the set), because files can be opened, written
to, closed, and deleted without ever being sync'd.
And I like moving this tracking under Dir – that's where it belongs.
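A minimal sketch of what that tracking could look like under Dir (the TrackingDir/closeOutput names and the recording list are illustrative assumptions, not the actual patch):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical Directory-level tracking of files written but not yet sync'd.
// The upkeep lives in closeOutput() and deleteFile(): a file that is opened,
// written, closed, and then deleted never needs (and never gets) a sync.
class TrackingDir {
  private final Set<String> staleFiles =
      Collections.synchronizedSet(new HashSet<String>());
  final List<String> synced = new ArrayList<>(); // records sync'd files, for illustration

  // Called when an output for this file is closed after writes.
  void closeOutput(String name) {
    staleFiles.add(name);
  }

  // Deleting a file removes it from the not-yet-sync'd set.
  void deleteFile(String name) {
    staleFiles.remove(name);
  }

  // Sync only the requested files that are actually stale.
  void sync(Collection<String> names) {
    for (String name : names) {
      if (staleFiles.remove(name)) {
        synced.add(name); // stand-in for a real fsync
      }
    }
  }
}
```

With this shape, a file deleted before any sync simply drops out of staleFiles and costs nothing at commit time.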
I assume that on calling syncEveryoneAndHisDog() you should sync all files that have been written to and closed, but not yet deleted.
This will over-sync in some situations, i.e., cause commit to take
longer than it should.
E.g., say a merge has finished with the first set of files (say _X.fdx/t,
since it merges fields first) but is still working on postings, when
the user calls commit. We should not sync _X.fdx/t then, because they
are unreferenced by the segments_N we are committing.
Or the merge has finished (so _X.* has been created) but is now off
building the _X.cfs file – we don't want to sync _X.*, only _X.cfs
when it's done.
Another example: we don't do this today, but addIndexes should really
run fully outside of IW's normal segments file, merging away, and then
only on final success alter IW's segmentInfos. If we switch to that,
we don't want to sync all the files that addIndexes is temporarily
creating along the way.
The knowledge of which files "make up" the transaction lives above
Directory... so I think we should retain the per-file control.
I proposed the bulk-sync API so that Dir impls could choose to do a
system-wide sync – or, more generally, so that any Dir impl that can
be more efficient when it knows the precise set of files that must be
sync'd can take advantage of that.
If we stick with the file-by-file API, doing a system-wide sync is
somewhat trickier, because you can't assume from one call to the
next that nothing has changed.
Also, bulk sync better matches the semantics IW/IR require: these
consumers don't care about the order in which the files are sync'd;
they just care that the requested set is sync'd. So it exposes a
degree of freedom to Dir impls that's otherwise hidden today.
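The bulk API shape I have in mind is roughly this (a sketch; the BulkSyncDir/doSync names are my own, not from the patch). The default implementation just loops per-file, but an impl with a cheaper system-wide sync can override sync() and ignore the individual names, since the caller only cares that the whole set ends up durable:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Illustrative base class: bulk sync hands the impl the precise set of
// files that must be made durable, in no particular order.
abstract class BulkSyncDir {
  // Per-file primitive each impl provides.
  protected abstract void doSync(String name);

  // Default bulk sync: file-by-file. An impl that can issue one
  // system-wide sync may override this and skip the loop entirely.
  public void sync(Collection<String> names) {
    for (String name : names) {
      doSync(name);
    }
  }
}

// Trivial impl that records what was sync'd, to show the default loop.
class RecordingDir extends BulkSyncDir {
  final List<String> synced = new ArrayList<>();

  @Override
  protected void doSync(String name) {
    synced.add(name);
  }
}
```

Per-file control is retained (the caller still names the exact files of the transaction), while the Dir impl gains the freedom to satisfy the request however it likes.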