Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.1.0
- Fix Version/s: None
- Component/s: None
Description
In the spark-submit script, the lines below:

    elif [ "$1" = "--driver-memory" ]; then
    export SPARK_SUBMIT_DRIVER_MEMORY=$2

are wrong: spark-submit is not the process that will host the driver when you run in yarn-cluster mode. So, when I launch spark-submit on a light client machine with only 2 GB of memory and want to allocate 4 GB of memory to the driver (which will run inside the YARN ApplicationMaster on a big YARN node with, say, 64 GB of RAM), spark-submit fails with an OutOfMemoryError.
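One way to fix this would be to guard the export on the deploy mode. A minimal sketch, assuming the script has already parsed the deploy mode into a DEPLOY_MODE variable (a hypothetical name; the real script's option parsing may differ):

    elif [ "$1" = "--driver-memory" ]; then
      # Hypothetical guard: only size the client JVM with --driver-memory
      # when the driver actually runs in the client process (client mode).
      # In cluster mode the value should only be forwarded to YARN.
      if [ "$DEPLOY_MODE" != "cluster" ]; then
        export SPARK_SUBMIT_DRIVER_MEMORY=$2
      fi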
Issue Links
- duplicates SPARK-3884: If deploy mode is cluster, --driver-memory shouldn't apply to client JVM (Resolved)