I have a situation that demands merging 2 cores, re-creating the data partitions, and splitting and installing the result into 2 (or more) cores, and this place seems to have questions close to that area. Basically, there are 2 cores on the same schema, roughly 55G and 35G (and growing), and data keeps getting pushed continuously into the 35G core. We can't allow it to fill up indefinitely, so periodically (during an offline/maintenance window) we regenerate both cores with the desired set of documents, keyed on some unique key, by re-indexing into fresh cores; we then discard the old oversized cores and install the fresh ones. Re-indexing is a pain, and in the end it produces essentially the same set of documents, except that the bigger core loses the oldest docs due to the size constraint, and the smaller core shrinks further because some of its docs shift over to the bigger one. You can think of this as a sliding-time-window core. So the basic steps in demand could be:
1.) Merge N cores into 1 big core (high cost).
2.) Scan through all the documents of the big core and create N new cores (the same number that were merged initially), filling each one only up to its allowed size.
3.) Hot swap the main cores with the fresh ones.
4.) Discard the old cores, probably after backing them up.
Step 1 above may be omitted if we can directly scan through the documents of the N source cores and keep pushing the docs over to the target cores. Rough sketches of what these steps might look like follow below.
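For step 1, a minimal sketch of merging N cores into one via the CoreAdmin MERGEINDEXES action, assuming a standalone (non-SolrCloud) Solr instance reachable at a hypothetical URL; the core names are placeholders, and Python with the requests library is used purely for illustration since the actual client stack isn't specified. The target core must already exist with a compatible schema, and the source cores shouldn't receive writes while the merge runs.

```python
import requests

SOLR = "http://localhost:8983/solr"  # assumed standalone Solr base URL

def merge_cores(target_core, source_cores):
    """Merge the indexes of source_cores into target_core via CoreAdmin MERGEINDEXES,
    then commit on the target so the merged documents become visible."""
    # srcCore may be repeated once per source core, so pass params as a list of tuples
    params = [("action", "MERGEINDEXES"), ("core", target_core)]
    params += [("srcCore", c) for c in source_cores]
    requests.get(f"{SOLR}/admin/cores", params=params).raise_for_status()
    requests.get(f"{SOLR}/{target_core}/update",
                 params={"commit": "true"}).raise_for_status()

# e.g. merge_cores("merged_core", ["core_55g", "core_35g"])
```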
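For step 2 (or the step-1-free variant, scanning the source cores directly), a sketch of streaming documents out with cursorMark deep paging and pushing them into target cores until each reaches a size cap. This is assumption-heavy: the uniqueKey field name, the byte cap, and the core names are hypothetical; only stored fields survive a query-based re-index; and the size check relies on the index size reported by CoreAdmin STATUS, which only reflects committed data, so the per-batch commit here is a simplification and would be expensive at 35-55G scale.

```python
import requests

SOLR = "http://localhost:8983/solr"   # assumed standalone Solr base URL
UNIQUE_KEY = "id"                     # assumed uniqueKey field
MAX_BYTES = 40 * 1024 ** 3            # hypothetical per-core size cap

def core_size_bytes(core):
    """Read the on-disk index size of a core from CoreAdmin STATUS."""
    r = requests.get(f"{SOLR}/admin/cores",
                     params={"action": "STATUS", "core": core, "wt": "json"})
    r.raise_for_status()
    return r.json()["status"][core]["index"]["sizeInBytes"]

def repartition(source_core, target_cores, batch=1000):
    """Stream documents out of source_core with cursorMark paging and push them
    into the target cores in order, spilling to the next target once the
    current one reaches the size cap (the last target absorbs any overflow)."""
    cursor, target_idx = "*", 0
    while True:
        r = requests.get(f"{SOLR}/{source_core}/select", params={
            "q": "*:*", "rows": batch, "wt": "json",
            "sort": f"{UNIQUE_KEY} asc",      # cursorMark requires a uniqueKey sort
            "cursorMark": cursor,
        })
        r.raise_for_status()
        body = r.json()
        docs = body["response"]["docs"]
        if docs:
            for d in docs:
                d.pop("_version_", None)      # drop internal field before re-adding
            target = target_cores[target_idx]
            requests.post(f"{SOLR}/{target}/update?commit=true",
                          json=docs).raise_for_status()
            if (core_size_bytes(target) >= MAX_BYTES
                    and target_idx + 1 < len(target_cores)):
                target_idx += 1               # current target is full, move on
        next_cursor = body["nextCursorMark"]
        if next_cursor == cursor:             # cursor stops advancing at the end
            break
        cursor = next_cursor
```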
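For steps 3 and 4, a sketch of the hot swap and retirement using the CoreAdmin SWAP and UNLOAD actions, again with hypothetical core names. SWAP atomically exchanges the two core names, so after it the old index sits under the fresh core's name; unloading with deleteIndex=false keeps the old index directory on disk so it can be backed up before being discarded.

```python
import requests

SOLR = "http://localhost:8983/solr"  # assumed standalone Solr base URL

def swap_and_retire(live_core, fresh_core):
    """Swap the live core with the freshly built one, then unload the old
    index (now registered under fresh_core) without deleting it from disk."""
    requests.get(f"{SOLR}/admin/cores", params={
        "action": "SWAP", "core": live_core, "other": fresh_core,
    }).raise_for_status()
    requests.get(f"{SOLR}/admin/cores", params={
        "action": "UNLOAD", "core": fresh_core, "deleteIndex": "false",
    }).raise_for_status()

# e.g. swap_and_retire("core_35g", "core_35g_rebuilt")
```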