Interleaving is an approach to online search quality evaluation that can be very useful for comparing Learning To Rank models.
The scope of this issue is to extend the LTR query parser to accept multiple models (two to start with).
If one model is passed, normal reranking happens.
If two models are passed, reranking happens for both models and the final reranked list is the interleaved sequence of results from the two models' lists.
As a first step, this will be implemented through:
Team Draft Interleaving with two models as input.
In the future, we can expand the functionality by adding the interleaving algorithm as a parameter.
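For reference, a minimal sketch of how Team Draft interleaving of two ranked lists could work; the class and method names below are hypothetical illustrations, not the actual Solr implementation. Each pick is attributed to a "team" (model), which is what later allows clicks to be credited to one model or the other:

```java
import java.util.*;

/**
 * Illustrative sketch of Team Draft Interleaving over two ranked lists.
 * Not the Solr implementation; names are hypothetical.
 */
public class TeamDraftInterleaving {

    /** Interleave rankings a and b into a single deduplicated list. */
    static <T> List<T> interleave(List<T> a, List<T> b, Random rnd) {
        List<T> result = new ArrayList<>();
        Set<T> picked = new HashSet<>();
        int ia = 0, ib = 0;          // cursors into each ranking
        int teamA = 0, teamB = 0;    // picks credited to each model
        while (ia < a.size() || ib < b.size()) {
            // Skip documents already drafted by the other team.
            while (ia < a.size() && picked.contains(a.get(ia))) ia++;
            while (ib < b.size() && picked.contains(b.get(ib))) ib++;
            if (ia >= a.size() && ib >= b.size()) break;
            boolean pickA;
            if (ia >= a.size()) pickA = false;
            else if (ib >= b.size()) pickA = true;
            // Team with fewer picks drafts next; random coin toss on ties.
            else pickA = teamA < teamB || (teamA == teamB && rnd.nextBoolean());
            if (pickA) { picked.add(a.get(ia)); result.add(a.get(ia)); teamA++; }
            else       { picked.add(b.get(ib)); result.add(b.get(ib)); teamB++; }
        }
        return result;
    }
}
```

Regardless of the coin tosses, the output always contains each document once, and the top result always comes from the head of one of the two input rankings.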