Description
YARN mode currently reads from SPARK_MASTER_MEMORY and SPARK_WORKER_MEMORY. If you have these settings left over from a standalone cluster setup and then try to run Spark on YARN on the same cluster, your executors unexpectedly get the amount of memory specified through SPARK_WORKER_MEMORY.
This behavior exists largely for backward compatibility. At the very least, however, we should log a warning when these variables are used.
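A minimal sketch of what such a warning could look like, assuming a small helper invoked during YARN client setup (the helper name, the suggested replacements, and the log messages are illustrative assumptions, not the actual Spark code):

```scala
// Hypothetical helper: warn when deprecated standalone-mode memory
// variables are set in a YARN deployment. Names and messages are
// illustrative, not taken from the Spark codebase.
object DeprecatedMemoryEnvVars {
  // Deprecated variable -> suggested YARN-appropriate replacement.
  private val deprecated = Seq(
    "SPARK_MASTER_MEMORY" -> "--driver-memory (or spark.driver.memory)",
    "SPARK_WORKER_MEMORY" -> "--executor-memory (or spark.executor.memory)")

  def warnIfSet(log: String => Unit = msg => System.err.println(msg)): Unit =
    for ((envVar, replacement) <- deprecated; value <- sys.env.get(envVar)) {
      log(s"WARN: $envVar is set (to $value) but is a standalone-mode " +
        s"setting; on YARN, prefer $replacement. Support for $envVar is " +
        "kept only for backward compatibility.")
    }
}
```

Calling `DeprecatedMemoryEnvVars.warnIfSet()` early in the YARN client path would preserve the backward-compatible behavior while making the surprising source of the memory setting visible in the logs.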