Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.0.0
- Labels: None
Description
For the first version of accelerator-aware scheduling (SPARK-27495), the SPIP conditioned support for dynamic allocation on a strict requirement that we don't waste any resources. Under that requirement, the number of slots each executor has could be calculated from the number of cores and task cpus, just as is done today.
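As a minimal sketch of that calculation (the names here are illustrative, not Spark's actual internals):

    object SlotsSketch {
      // With cores as the limiting resource, slots per executor follow
      // directly from spark.executor.cores and spark.task.cpus.
      def slotsPerExecutor(executorCores: Int, taskCpus: Int): Int =
        executorCores / taskCpus

      def main(args: Array[String]): Unit =
        // e.g. spark.executor.cores=8, spark.task.cpus=1 -> 8 slots
        println(slotsPerExecutor(executorCores = 8, taskCpus = 1))
    }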
Somewhere along the line of development we relaxed that and now only warn when resources are being wasted. This breaks the dynamic allocation logic when cores are no longer the limiting resource: we will request fewer executors than we really need to run everything.
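To make the under-request concrete, here is a hypothetical sketch with made-up config values (not taken from the issue):

    object UnderRequestSketch {
      def main(args: Array[String]): Unit = {
        val executorCores = 8 // spark.executor.cores
        val taskCpus      = 1 // spark.task.cpus
        val executorGpus  = 2 // spark.executor.resource.gpu.amount
        val taskGpus      = 1 // spark.task.resource.gpu.amount

        val slotsByCores = executorCores / taskCpus // 8
        val slotsByGpus  = executorGpus / taskGpus  // 2, the real limit

        val pendingTasks = 80
        // Dynamic allocation sizes its request assuming cores limit concurrency:
        val requested = math.ceil(pendingTasks.toDouble / slotsByCores).toInt // 10
        // But only slotsByGpus tasks can actually run per executor, so we need:
        val needed = math.ceil(pendingTasks.toDouble / slotsByGpus).toInt     // 40
        println(s"requested=$requested, actually needed=$needed")
      }
    }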
We have to enforce that cores are always the limiting resource, so we should throw if they are not.
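A sketch of what that enforcement could look like (illustrative only; the real check would live in Spark's resource/config validation, and this helper is hypothetical):

    object LimitingResourceCheck {
      // Throw at configuration time if any custom resource, rather than
      // cores, would be the limiting resource. Sketch only.
      def checkCoresAreLimiting(
          executorCores: Int,
          taskCpus: Int,
          resources: Map[String, (Int, Int)] // name -> (perExecutor, perTask)
      ): Unit = {
        val slotsByCores = executorCores / taskCpus
        resources.foreach { case (name, (perExecutor, perTask)) =>
          val slots = perExecutor / perTask
          if (slots < slotsByCores) {
            throw new IllegalArgumentException(
              s"Resource $name allows only $slots tasks per executor, " +
              s"fewer than the $slotsByCores allowed by cores; cores " +
              "must be the limiting resource.")
          }
        }
      }
    }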
I guess we could make this a requirement only when dynamic allocation is on, but to keep the behavior consistent I would say we just require it across the board.
Attachments
Issue Links
- is related to
  SPARK-30446 Accelerator aware scheduling checkResourcesPerTask doesn't cover all cases (Resolved)
- relates to
  SPARK-24615 SPIP: Accelerator-aware task scheduling for Spark (Resolved)
- links to