So here's an overview of how the Replicator works (it's also documented under oal.replicator.package.html):
At a high-level, producers (e.g. indexer) publish Revisions, and consumers update to the latest Revision available. Like SVN, if a client is on rev1 and the server has rev4, the next update request will upgrade the client to rev4, skipping all intermediate revisions.
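This SVN-like catch-up behaviour can be modelled in a few lines of plain Java. Note this is an illustrative sketch of the semantics, not the Lucene API — the class and method names here are made up:

```java
// Minimal model of "update to the latest revision, skipping intermediates":
// the client never walks through rev2 and rev3, it jumps straight to rev4.
final class RevisionModel {
    private int serverRevision = 1;
    private int clientRevision = 1;

    /** The producer publishes a newer revision. */
    void publish(int rev) {
        serverRevision = Math.max(serverRevision, rev);
    }

    /** One update request; returns the revision the client ends up on. */
    int update() {
        if (serverRevision > clientRevision) {
            clientRevision = serverRevision; // skip all intermediate revisions
        }
        return clientRevision;
    }
}
```

A client sitting on rev1 that missed rev2 and rev3 still reaches rev4 with a single update request.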
The Replicator offers two implementations at the moment: LocalReplicator, to be used at the server side, and HttpReplicator, to be used by clients to e.g. update over HTTP. In the future we may want to add other Replicator implementations, e.g. rsync, torrent... For HTTP, the package also provides a ReplicationService which acts on the HTTP servlet request/response following some API specification. In that sense, the HttpReplicator expects a certain HTTP implementation on the server side, and ReplicationService helps you by implementing that API. The reason it's not a servlet itself is so that you can plug it into your application's servlet freely.
A Revision is basically a list of files and sources. For example, IndexRevision contains the list of files in an IndexCommit (and only one source), while IndexAndTaxonomyRevision contains the list of files from both IndexCommits with corresponding sources (index/taxonomy). When the server publishes either of these two revisions, the IndexCommits are snapshotted so that their files aren't deleted, and the Replicator serves clients' file requests from the Revision. The Revision is also responsible for releasing itself – this is done automatically by the Replicator, which releases a revision when it's no longer needed (i.e. a newer one has already been published) and there are no clients currently replicating its files.
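That release rule can be sketched as a tiny reference-count model. Again, this is illustrative plain Java, not the actual Revision/Replicator classes, which handle this bookkeeping internally:

```java
// Sketch of the release rule: a revision's snapshot is freed only once
// a newer revision exists AND no client is still copying its files.
final class RevisionHandle {
    final int id;
    private int activeClients = 0;
    private boolean superseded = false;
    private boolean released = false;

    RevisionHandle(int id) { this.id = id; }

    /** A client starts replicating this revision's files. */
    synchronized void acquire() { activeClients++; }

    /** A client finished (or aborted) replicating. */
    synchronized void release() {
        activeClients--;
        maybeRelease();
    }

    /** A newer revision was published. */
    synchronized void supersede() {
        superseded = true;
        maybeRelease();
    }

    private void maybeRelease() {
        if (superseded && activeClients == 0) {
            released = true; // snapshot dropped, files may now be deleted
        }
    }

    synchronized boolean isReleased() { return released; }
}
```

A superseded revision that still has an active client stays alive until that client finishes.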
On the client side, the package offers a ReplicationClient class which can either be invoked manually or run its update thread to periodically check for updates. The client is given a ReplicationHandler (two matching implementations: IndexReplicationHandler and IndexAndTaxonomyReplicationHandler) which is responsible for acting on the replicated files. The client first obtains all the files it needs (i.e. those that the new Revision offers and the client is still missing), and only after they have all been successfully copied over is the handler invoked. Both handlers copy the files from their temporary location to the index directories, fsync them and clean up the index so that unused files are deleted. You can provide each handler a Callable which is invoked after the index has been safely and successfully updated, so that you can e.g. call searcherManager.maybeReopen().
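The two-phase nature of an update (copy everything first, only then hand over to the handler and the callback) can be sketched like this — an illustrative plain-Java model with made-up names, not the ReplicationClient implementation:

```java
import java.util.*;

// Sketch of the client's two-phase update: every missing file is copied
// first; the handler and the user callback run only after all copies
// succeeded, so a half-copied revision is never acted upon.
final class UpdateFlow {
    /** Returns the files that had to be copied for this update. */
    static List<String> update(List<String> revisionFiles, Set<String> alreadyHave,
                               Runnable handler, Runnable callback) {
        List<String> copied = new ArrayList<>();
        for (String file : revisionFiles) {
            if (!alreadyHave.contains(file)) {
                copied.add(file); // in reality: copy into a temporary directory
            }
        }
        // All files are present -- let the handler move/fsync them into place...
        handler.run();
        // ...and finally notify the application, e.g. to reopen its searcher.
        callback.run();
        return copied;
    }
}
```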
Here's a general code example that explains how to work with the Replicator:
IndexWriter publishWriter; // the writer used to publish new revisions
Replicator replicator = new LocalReplicator();
replicator.publish(new IndexRevision(publishWriter)); // snapshot and publish the current commit
Callable<Boolean> callback = null; // invoked after each successful update; may be null
ReplicationHandler handler = new IndexReplicationHandler(indexDir, callback);
SourceDirectoryFactory factory = new PerSessionDirectoryFactory(workDir);
ReplicationClient client = new ReplicationClient(replicator, handler, factory);
client.updateNow(); // or client.startUpdateThread(...) to check periodically
The package of course comes with unit tests, though I'm sure there's room for improvement (there always is!).