Description
If spark.cores.max = 10 and spark.executor.cores = 4, for example, two executors get launched, so totalCoresAcquired = 8. No future Mesos offer will ever get a task launched, because the launch condition sc.conf.getInt("spark.executor.cores", ...) + totalCoresAcquired <= maxCores (here 4 + 8 <= 10) will always evaluate to false. However, in handleMatchedOffers we check totalCoresAcquired >= maxCores to decide whether to decline the offer "for a configurable amount of time to avoid starving other frameworks", and in this scenario that check (here 8 >= 10) also always evaluates to false. This leaves the framework in limbo: it never launches any new executors, yet it declines each offer only for the Mesos default of 5 seconds, thus starving other frameworks of offers.
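The arithmetic of the two checks can be sketched as a standalone model. This is not the actual scheduler code; coresAfterGreedyLaunch is a hypothetical helper, and maxCores, executorCores, and acquired simply mirror the names used in the description above:

```scala
// Model of executor launch: keep launching whole executors while one
// still fits under maxCores (spark.cores.max).
def coresAfterGreedyLaunch(maxCores: Int, executorCores: Int): Int = {
  var acquired = 0
  while (executorCores + acquired <= maxCores) acquired += executorCores
  acquired
}

val maxCores = 10       // spark.cores.max
val executorCores = 4   // spark.executor.cores
val acquired = coresAfterGreedyLaunch(maxCores, executorCores) // 4 + 4 = 8

// Launch check applied to every future offer: 4 + 8 <= 10 is false,
// so no further executor is ever launched.
val canLaunchMore = executorCores + acquired <= maxCores

// Long-decline check in handleMatchedOffers: 8 >= 10 is false,
// so the offer is declined only for the Mesos default of 5 seconds.
val declinesForLong = acquired >= maxCores

println(s"acquired=$acquired canLaunchMore=$canLaunchMore declinesForLong=$declinesForLong")
```

Because both booleans are false whenever maxCores is not a multiple of executorCores, the framework neither launches tasks nor backs off, and keeps receiving (and briefly declining) offers indefinitely.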
Issue Links
- is related to:
  - SPARK-12554 Standalone mode may hang if max cores is not a multiple of executor cores (Resolved)
  - SPARK-19702 Increasse refuse_seconds timeout in the Mesos Spark Dispatcher (Resolved)
  - MESOS-6112 Frameworks are starved when > 5 are run concurrently (Resolved)