Affects Version/s: 1.6.0, 1.6.1
Fix Version/s: 2.0.0
While debugging performance issues in a Spark program, I've found a simple way to slow down Spark 1.6 significantly by filling the RDD memory cache. This seems to be a regression, because setting "spark.memory.useLegacyMode=true" fixes the problem. Here is a repro: a simple program that fills Spark's memory cache with a MEMORY_ONLY cached RDD (of course the same problem comes up in more complex situations, too):
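(The original snippet is not reproduced here; the following is a minimal sketch of such a program. The class name, partition count, and array sizes are illustrative, not the exact values from my run.)

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

// Fills the RDD memory cache with MEMORY_ONLY blocks and keeps re-reading them,
// so the cached data stays alive long enough to be promoted to the old generation.
object FillMemoryCache {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FillMemoryCache"))

    // ~1 GB of cached data: 100 partitions, each holding a ~10 MB Int array.
    val cached = sc.parallelize(0 until 100, 100)
      .map(_ => new Array[Int](2500000))
      .persist(StorageLevel.MEMORY_ONLY)

    // Re-scan the cached RDD repeatedly so the GC pressure becomes visible.
    for (_ <- 1 to 100) {
      cached.map(_.length.toLong).reduce(_ + _)
    }

    sc.stop()
  }
}
{code}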
If I call it the following way, it completes in around 5 minutes on my laptop, while frequently pausing for slow Full GC cycles. I can also see with jvisualvm (Visual GC plugin) that the JVM's old generation is 96.8% full.
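(The exact command is not reproduced here; the invocation was along these lines, with the jar path, class name, and memory size as placeholders:)

{code}
spark-submit --master local[4] --driver-memory 4g \
  --class FillMemoryCache fill-memory-cache.jar
{code}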
If I add any one of the flags below, the run time drops to around 40-50 seconds, and the difference comes from the drop in GC times:
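(The original flag list is not reproduced here; based on the discussion below, the flags were along these lines, any one of them on its own:)

{code}
--conf spark.memory.useLegacyMode=true     # fall back to the Spark 1.5 memory manager
--conf spark.memory.fraction=0.6           # keep the cache upper limit below the old generation size
--driver-java-options "-XX:NewRatio=3"     # enlarge the old generation to ~75% of the heap instead
{code}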
All the other storage levels except DISK_ONLY produce similar symptoms. It looks like the problem is that the amount of data Spark wants to store long-term ends up being larger than the JVM's old generation, and this triggers Full GCs repeatedly.
I did some research:
- In Spark 1.6, spark.memory.fraction is the upper limit on cache size. It defaults to 0.75.
- In Spark 1.5, spark.storage.memoryFraction is the upper limit on cache size. It defaults to 0.6, and...
- http://spark.apache.org/docs/1.5.2/configuration.html even says that it shouldn't be bigger than the size of the old generation.
- On the other hand, OpenJDK's default NewRatio is 2, which means the old generation takes up about 66% of the heap. The Spark 1.6 default of 0.75 is therefore larger than the old generation, so it contradicts this advice (see the back-of-the-envelope calculation below).
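To make the mismatch concrete, here is a rough calculation assuming a 4 GB heap and ignoring the small reserved/safety overheads that both memory managers subtract; the numbers are illustrative only:

{code:scala}
// Illustrative numbers for a 4 GB heap, showing why the 1.6 defaults collide.
val heap       = 4.0                 // GB
val oldGen     = heap * 2.0 / 3.0    // NewRatio=2 => old generation ~2.67 GB (66% of the heap)
val maxCache16 = heap * 0.75         // spark.memory.fraction default in 1.6 => 3.0 GB
val maxCache15 = heap * 0.6          // spark.storage.memoryFraction default in 1.5 => 2.4 GB

// 1.6: maxCache16 (3.0 GB) > oldGen (2.67 GB) -- long-lived cached blocks overflow the
//      old generation, so promotions keep triggering Full GCs.
// 1.5: maxCache15 (2.4 GB) < oldGen (2.67 GB) -- the cache fits, matching the 1.5 docs' advice.
{code}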
http://spark.apache.org/docs/1.6.1/tuning.html recommends that if the old generation is running close to full, setting spark.memory.storageFraction to a lower value should help. I have tried spark.memory.storageFraction=0.1, but it still doesn't fix the issue. This is not a surprise: http://spark.apache.org/docs/1.6.1/configuration.html explains that storageFraction is not an upper limit on the cache but more like a lower bound (the portion of storage memory immune to eviction). The real upper limit is spark.memory.fraction.
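Continuing the illustrative 4 GB example above (again a sketch, not exact numbers), this is why lowering storageFraction does not cap the cache:

{code:scala}
val unified         = 4.0 * 0.75     // spark.memory.fraction region: 3.0 GB
val evictionImmune  = unified * 0.1  // spark.memory.storageFraction=0.1: only 0.3 GB is immune to eviction
val cacheUpperLimit = unified        // storage can still grow into idle execution memory, up to the full 3.0 GB

// So even with storageFraction=0.1 the cache can grow past the ~2.67 GB old generation,
// and the Full GC problem remains.
{code}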
To sum up my questions/issues:
- At least http://spark.apache.org/docs/1.6.1/tuning.html should be fixed. Maybe the old generation size should also be mentioned in configuration.html near spark.memory.fraction.
- Is it a goal for Spark to support heavy caching with default parameters and without GC breakdown? If so, then better default values are needed.