In an earlier investigation, 4 GB of heap were needed to transfer a 1 GB blob, and 6 GB for a 2 GB blob. This was partly due to the investigation using addTestContent, which allocates the whole blob as a single huge byte array on the heap.
OAK-5902 introduced chunking for transferring blobs between primary and standby. With chunking, the memory needed to sync a big blob should be roughly the chunk size. By changing how the test data is created, it should be possible to transfer a big blob (e.g. 2.5 GB) with much less memory.
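A minimal sketch of the idea behind fixing the test-data creation: instead of materializing the blob as one byte array, generate the content lazily through an InputStream, so heap usage stays O(buffer size) regardless of blob length. The class and names below (StreamedBlob) are hypothetical illustrations, not Oak API; a real test would hand such a stream to the repository's binary-creation call rather than a byte[].

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Random;

/**
 * Hypothetical helper: streams a fixed number of pseudo-random bytes
 * without ever holding the full blob in memory. Heap usage is bounded
 * by the read buffer, not by the blob size.
 */
public class StreamedBlob extends InputStream {
    private final long length;
    private final Random random;
    private long pos = 0;

    public StreamedBlob(long length, long seed) {
        this.length = length;
        this.random = new Random(seed); // seeded, so content is reproducible
    }

    @Override
    public int read() {
        if (pos >= length) {
            return -1; // end of stream
        }
        pos++;
        return random.nextInt(256);
    }

    public static void main(String[] args) throws IOException {
        // 10 MB for the demo; a 2.5 GB blob works the same way, since only
        // the 8 KB buffer below ever lives on the heap.
        long size = 10L * 1024 * 1024;
        byte[] buf = new byte[8192];
        long total = 0;
        try (InputStream in = new StreamedBlob(size, 42)) {
            int n;
            while ((n = in.read(buf, 0, buf.length)) != -1) {
                total += n;
            }
        }
        System.out.println(total == size); // prints "true"
    }
}
```

With a stream like this on the test side and OAK-5902's chunked transfer on the sync side, neither end needs to hold the full 2.5 GB in memory at once.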