There are a few memory-related limits that users hit often and that we could raise, especially now that typical memory sizes have grown.
- spark.akka.frameSize: This defaults to 10 (MB) but is often hit by map output statuses in large shuffles. AFAIK the memory is not fully allocated up-front, so we can just make this larger without affecting jobs that never send a status that large.
- spark.executor.memory: Defaults to 512m, which is really small. We can at least increase it to 1g, though this is something users do need to set on their own.
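In the meantime, users who hit these limits can raise them per application in conf/spark-defaults.conf (or via SparkConf). A sketch, with illustrative values; only the defaults (10 and 512m) come from the discussion above:

```properties
# Illustrative overrides; tune to your workload.
spark.akka.frameSize    64    # MB; default is 10, exceeded by large map output statuses
spark.executor.memory   1g    # default is 512m
```

The same values can be passed on the command line with --conf, which is convenient for testing a one-off job.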