Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Fix Version/s: 0.9.6
- Component/s: None
- Labels: None
Description
In Spark distributed mode, the number of worker nodes requested via the MRQL -nodes parameter must be propagated to Spark through the SPARK_WORKER_INSTANCES setting (renamed SPARK_EXECUTOR_INSTANCES in Spark 1.3.*) together with SPARK_WORKER_CORES; otherwise, Spark ignores the requested node count and always uses all the available cores in the cluster.
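A minimal sketch of the fix described above: export the Spark settings derived from the requested node count before launching the job. SPARK_WORKER_INSTANCES, SPARK_EXECUTOR_INSTANCES, and SPARK_WORKER_CORES are real Spark environment settings; the NODES and CORES_PER_NODE variables here are illustrative placeholders, not MRQL names.

```shell
# Placeholder values: NODES stands in for the value passed to MRQL via -nodes.
NODES=4
CORES_PER_NODE=2

# Spark <= 1.2.x reads SPARK_WORKER_INSTANCES; Spark 1.3.* renamed it.
export SPARK_WORKER_INSTANCES=$NODES
export SPARK_EXECUTOR_INSTANCES=$NODES
export SPARK_WORKER_CORES=$CORES_PER_NODE

echo "workers=$SPARK_WORKER_INSTANCES cores=$SPARK_WORKER_CORES"
```

Without these exports, Spark's standalone scheduler defaults to grabbing every available core, so the -nodes value alone has no effect.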