Description
I tried to come up with a more succinct title.
The issue only happens when `spark.executor.cores` is not set. Currently, if a worker has 8 GB of memory and `spark.executor.memory` is set to 1 GB, the executor launched on that worker can get at most 8 cores, even if the worker has more cores available.
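For concreteness, a minimal sketch of a configuration that hits this (the app name and master URL are placeholders; assume the standalone worker has 8 GB of memory and more than 8 cores, say 16):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// spark.executor.cores is deliberately NOT set, so the single executor on the
// worker should get as many of the worker's cores as possible.
val conf = new SparkConf()
  .setAppName("executor-core-cap-repro")   // placeholder app name
  .setMaster("spark://master-host:7077")   // placeholder standalone master URL
  .set("spark.executor.memory", "1g")      // worker has 8 GB of memory

val sc = new SparkContext(conf)
// Observed: the executor launched on the 8 GB worker gets at most 8 cores,
// even though the worker has more cores available.
```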
This is caused by the fix for SPARK-8881.
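To show where the 8-core cap comes from, here is a simplified, illustrative sketch (not the actual `Master.scheduleExecutorsOnWorkers` code; the function name and worker numbers are made up) of a per-core assignment loop that reserves `spark.executor.memory` worth of memory for every core it assigns when `spark.executor.cores` is unset:

```scala
// Illustrative only: a per-core scheduling check that gates every assigned
// core on a full executor's worth of memory. With 8 GB free and 1 GB per
// executor, it stops at 8 cores even though more cores are still free.
object SchedulingSketch {
  def maxAssignableCores(
      workerCoresFree: Int,
      workerMemoryFreeMb: Int,
      memoryPerExecutorMb: Int): Int = {
    var assignedCores = 0
    var assignedMemoryMb = 0
    // Cores are handed out one at a time; each one also reserves
    // memoryPerExecutorMb, which is what produces the cap.
    while (assignedCores < workerCoresFree &&
        workerMemoryFreeMb - assignedMemoryMb >= memoryPerExecutorMb) {
      assignedCores += 1
      assignedMemoryMb += memoryPerExecutorMb
    }
    assignedCores
  }

  def main(args: Array[String]): Unit = {
    // Worker: 16 cores and 8 GB free; spark.executor.memory = 1g; cores unset.
    val cores = maxAssignableCores(
      workerCoresFree = 16,
      workerMemoryFreeMb = 8192,
      memoryPerExecutorMb = 1024)
    println(s"assigned cores: $cores") // prints 8, not 16
  }
}
```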