MongoDocumentStore uses the default batch size, which means MongoDB initially returns 100 documents and then as many documents as fit into 4 MB per batch. Depending on the document size, the number of documents per batch may be quite high, which increases the risk of hitting the 60-second query timeout defined by Oak.
Tuning the batch size (or using a limit) can also reduce the amount of data transferred from MongoDB to Oak. The DocumentNodeStore fetches child nodes in batches as well, but with slightly different logic: the initial batch size is 100 and every subsequent batch doubles in size until it reaches 1600. Bandwidth is wasted whenever the MongoDB Java driver fetches far more documents than Oak actually requested.
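The doubling behaviour described above can be sketched in plain Java. This is a simplified model of the sizing logic, not the actual DocumentNodeStore code; the class and method names here are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the child-node batch sizing described above: start at 100
// and double the size of every subsequent batch, capped at 1600.
// (Hypothetical class for illustration; not Oak's implementation.)
public class BatchSizing {

    static final int INITIAL_BATCH_SIZE = 100;
    static final int MAX_BATCH_SIZE = 1600;

    // Returns the sequence of batch sizes needed to fetch `total` child nodes.
    static List<Integer> batchSizes(int total) {
        List<Integer> sizes = new ArrayList<>();
        int batch = INITIAL_BATCH_SIZE;
        int fetched = 0;
        while (fetched < total) {
            // Last batch may be smaller than the current batch size.
            int size = Math.min(batch, total - fetched);
            sizes.add(size);
            fetched += size;
            // Double for the next round, but never exceed the cap.
            batch = Math.min(batch * 2, MAX_BATCH_SIZE);
        }
        return sizes;
    }

    public static void main(String[] args) {
        // Fetching 5000 child nodes takes seven round trips:
        // 100, 200, 400, 800, 1600, 1600, 300
        System.out.println(batchSizes(5000));
    }
}
```

With this schedule, small child lists stay cheap (a single batch of up to 100), while large ones converge quickly on the 1600 cap; setting the driver's batch size or a query limit close to what Oak will actually consume avoids transferring the excess.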