Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Incomplete
- Environment: Linux EC2, different VPC
Description
SPARK_LOCAL_IP does not bind to the provided IP on slaves.
When a job or a spark-shell is launched from a second network, the IP the slave reports is still its first IP, not the one set in SPARK_LOCAL_IP.
The job therefore fails with the message:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
This is not actually a resource problem: the driver cannot connect to the slave because it was given the wrong IP.
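For reference, the expected behavior would be that setting SPARK_LOCAL_IP in conf/spark-env.sh on each slave makes the worker bind to and advertise that address (the IP below is a placeholder):

```shell
# conf/spark-env.sh on a slave
# Placeholder address: use the slave's IP on the network that is
# reachable from the driver (the second VPC in this report).
export SPARK_LOCAL_IP=10.0.1.23
```

With this set, the worker should register with the master using the given address instead of its first interface's IP, which is what this bug report says does not happen.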