Description
If the user sets spark.executor.cores to a value less than spark.task.cpus, the task scheduler falls into an infinite loop; we should throw an exception in that case.
In standalone and Mesos mode we should respect spark.task.cpus as well; I will file another JIRA to address that.
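A minimal sketch of the kind of fail-fast check this issue asks for, assuming the validation happens against a SparkConf; the helper name CoreTaskCpuCheck.validate is illustrative only and is not the actual patch:

```scala
import org.apache.spark.{SparkConf, SparkException}

object CoreTaskCpuCheck {
  // Fail fast if no executor can ever hold a single task: with
  // spark.executor.cores < spark.task.cpus the scheduler would keep
  // looking for a slot that can never exist and loop indefinitely.
  def validate(conf: SparkConf): Unit = {
    val executorCores = conf.getInt("spark.executor.cores", 1)
    val taskCpus = conf.getInt("spark.task.cpus", 1)
    if (executorCores < taskCpus) {
      throw new SparkException(
        s"spark.executor.cores ($executorCores) must be at least " +
          s"spark.task.cpus ($taskCpus); otherwise no task can ever be scheduled.")
    }
  }
}
```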
Issue Links
- is related to SPARK-5337: respect spark.task.cpus when launch executors (Resolved)