Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
- Affects Version/s: 1.0.0
- Fix Version/s: None
- Component/s: None
- Environment: Ubuntu precise, on YARN (CDH 5.1.0)
Description
It would be useful to allow specifying --num-executors * when submitting jobs to YARN, and to have Spark automatically determine how many total cores are available in the cluster by querying YARN.
Our scenario is multiple users running research batch jobs. We never want to have a situation where cluster resources aren't being used, so ideally users would specify * and let YARN scheduling and preemption ensure fairness.
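As a sketch of what the requested behavior could look like, a client can already derive a "use the whole cluster" executor count from the ResourceManager's cluster metrics (the `availableVirtualCores` field is a real part of the YARN RM REST endpoint `/ws/v1/cluster/metrics`; the helper function and its name below are hypothetical, not part of Spark):

```python
import json

def executors_for_full_cluster(metrics_json, cores_per_executor):
    """Hypothetical helper: derive a --num-executors value from YARN
    cluster metrics, i.e. what "--num-executors *" might resolve to.

    `metrics_json` is the body returned by the ResourceManager REST
    endpoint /ws/v1/cluster/metrics; `availableVirtualCores` is a real
    field of its `clusterMetrics` object.
    """
    metrics = json.loads(metrics_json)["clusterMetrics"]
    available = metrics["availableVirtualCores"]
    # Request as many executors as currently fit; never fewer than one.
    return max(1, available // cores_per_executor)

# Example: 48 free vcores, 4 cores per executor -> request 12 executors
sample = json.dumps({"clusterMetrics": {"availableVirtualCores": 48}})
print(executors_for_full_cluster(sample, 4))  # -> 12
```

Note this is only a point-in-time snapshot; the appeal of a built-in `*` is that YARN's scheduler and preemption would keep the allocation fair as other jobs arrive, which a one-shot query cannot do.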
Issue Links
- duplicates SPARK-3183 "Add option for requesting full YARN cluster" (Resolved)