When testing a Spark job with fault-tolerant scanners enabled, reading a large table (~1.5TB replicated) with many columns used up all of the memory on the tablet servers. 400 GB of total memory was consumed even though the memory limit was configured for 60 GB. This impacted all services on the machines, making the cluster effectively unusable. Killing the job running the scans did not free the memory; however, restarting the tablet servers restored the cluster to a healthy state.
Based on a chat with Todd Lipcon, Jean-Daniel Cryans, and Mike Percy, it looks like MergeIterator initialization is not lazy: all sub-iterators are opened up front. We could fix this by initializing the merger lazily based on rowset bounds, limiting the number of concurrently open scanners to O(rowset height).
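To illustrate the idea (this is a hypothetical sketch, not Kudu's actual MergeIterator code): each rowset carries key bounds, so a sub-iterator only needs to be opened once the merge cursor reaches its lower bound, and it can be closed as soon as it is exhausted. The number of simultaneously open iterators is then bounded by how many rowsets overlap at any one key, i.e. the rowset height, rather than the total rowset count.

```python
import heapq

def lazy_merge(runs):
    """Lazily merge sorted runs, each given as (lo, hi, values).

    A run's iterator is opened only when the merge head reaches its
    lower bound, and dropped when exhausted, so the peak number of
    open iterators tracks the overlap ('rowset height') of the runs,
    not the total number of runs.

    Returns (merged_values, max_concurrently_open).
    """
    pending = sorted(runs, key=lambda r: r[0])  # unopened runs, by lower bound
    heap = []       # (next_value, tie_breaker, iterator) for open runs
    idx = 0         # next pending run to consider opening
    seq = 0         # tie-breaker so the heap never compares iterators
    out = []
    open_count = 0
    max_open = 0
    while idx < len(pending) or heap:
        # Open every run whose lower bound is at or below the merge head.
        while idx < len(pending) and (not heap or pending[idx][0] <= heap[0][0]):
            _lo, _hi, values = pending[idx]
            it = iter(values)
            first = next(it, None)
            if first is not None:
                heapq.heappush(heap, (first, seq, it))
                seq += 1
                open_count += 1
            idx += 1
        if not heap:
            break  # all remaining runs were empty
        max_open = max(max_open, open_count)
        val, s, it = heapq.heappop(heap)
        out.append(val)
        nxt = next(it, None)
        if nxt is not None:
            heapq.heappush(heap, (nxt, s, it))
        else:
            open_count -= 1  # run exhausted; its iterator is released
    return out, max_open
```

With four runs forming two disjoint overlapping pairs, at most two iterators are ever open at once, even though an eager merger would open all four up front.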