Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
Description
Currently, in fine-grained mode, Spark will keep accepting the resources offered by Mesos as long as offers are available that match the scheduler's requirements. This can lead to excessive resource usage and prevent other frameworks from getting their fair share.

We should add an option to cap the number of executors launched, so that the combination of spark.task.cpus and spark.mesos.executor.max determines the total number of cores Spark will grab. A hedged configuration sketch follows below.
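The sketch below illustrates how the proposed cap might be set through SparkConf. Note that spark.mesos.executor.max is only the option proposed in this issue and does not exist in Spark; the master URL, app name, and numeric values are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MesosFineGrainedCapSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("mesos://zk://host:2181/mesos") // placeholder Mesos master URL
      .setAppName("fine-grained-cap-example")
      .set("spark.mesos.coarse", "false")        // run in fine-grained mode
      .set("spark.task.cpus", "2")               // existing setting: cores per task
      .set("spark.mesos.executor.max", "10")     // proposed option: cap on executors

    val sc = new SparkContext(conf)
    // Under the proposal, the scheduler would stop accepting Mesos offers once
    // spark.mesos.executor.max * spark.task.cpus cores are in use (here 10 * 2 = 20),
    // leaving the remaining offers for other frameworks.
    sc.stop()
  }
}
```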