Description
Currently in docs:
spark.mesos.fetcherCache.enable (default: false): If set to `true`, all URIs (example: `spark.executor.uri`, `spark.mesos.uris`) will be cached by the Mesos Fetcher Cache.
Currently in MesosClusterScheduler.scala (which passes the parameter to the driver):
private val useFetchCache = conf.getBoolean("spark.mesos.fetchCache.enable", false)
Currently in MesosCoarseGrainedSchedulerBackend.scala (which passes the Mesos caching parameter to executors):
private val useFetcherCache = conf.getBoolean("spark.mesos.fetcherCache.enable", false)
This naming discrepancy dates back to version 2.0.0 (jira).
This means that when spark.mesos.fetcherCache.enable=true is specified, the Mesos cache will be used only for executors, and not for drivers.
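For illustration, with spark.mesos.fetcherCache.enable=true in the conf, the two reads quoted above resolve differently:
conf.getBoolean("spark.mesos.fetchCache.enable", false)   // driver path (MesosClusterScheduler): false, cache stays off
conf.getBoolean("spark.mesos.fetcherCache.enable", false) // executor path (MesosCoarseGrainedSchedulerBackend): true, cache on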
IMPACT:
Not caching these driver files (typically at least the Spark binaries, a custom application jar, and additional dependencies) adds considerable network traffic and startup time when Spark applications are run frequently on a Mesos cluster. Additionally, with the cache off, archives like spark-x.x.x-bin-*.tgz are copied into the sandbox and left there in addition to being extracted (rather than being extracted directly without the extra copy), which can considerably increase disk usage. Users can currently work around this by also specifying the spark.mesos.fetchCache.enable option, but at a minimum that option should be documented.
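A minimal sketch of the workaround, assuming both keys are read as quoted above (whether the driver-side key has to be set on the dispatcher rather than per submission is a separate question, see SPARK-26192 below):
val conf = new org.apache.spark.SparkConf()
  .set("spark.mesos.fetcherCache.enable", "true") // picked up by MesosCoarseGrainedSchedulerBackend (executors)
  .set("spark.mesos.fetchCache.enable", "true")   // picked up by MesosClusterScheduler (driver)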
SUGGESTED FIX:
Document spark.mesos.fetchCache.enable for versions 2.0 through 2.4, and update MesosClusterScheduler.scala to read spark.mesos.fetcherCache.enable going forward (a one-line change).
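A sketch of what that one-line change in MesosClusterScheduler.scala could look like (falling back to the old spelling for backward compatibility is an assumption, not part of this report):
private val useFetchCache =
  conf.getBoolean("spark.mesos.fetcherCache.enable",
    conf.getBoolean("spark.mesos.fetchCache.enable", false))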
Attachments
Issue Links
- is caused by SPARK-15994 Allow enabling Mesos fetch cache in coarse executor backend (Resolved)
- relates to SPARK-26192 MesosClusterScheduler reads options from dispatcher conf instead of submission conf (Resolved)