Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Duplicate
- Affects Version/s: 1.4.0, 1.4.1
- Fix Version/s: None
- Component/s: None
Description
Before 1.4 it was possible to launch a worker node with an instance number and a comma-separated list of master nodes, for example:
sbin/start-slave.sh 1 "spark://localhost:7077,localhost:7078"
In 1.4.0 and 1.4.1 the same command fails:
starting org.apache.spark.deploy.worker.Worker, logging to /Users/jesper/Downloads/spark-1.4.0-bin-cdh4/sbin/../logs/spark-jesper-org.apache.spark.deploy.worker.Worker-1-Jespers-MacBook-Air.local.out
failed to launch org.apache.spark.deploy.worker.Worker:
Default is conf/spark-defaults.conf.
15/07/09 12:33:06 INFO Utils: Shutdown hook called
Spark 1.2 and 1.3.1 accept multiple masters in this format.
Update: in 1.4, start-slave.sh expects only the master list (no instance number).
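Given that change, the equivalent 1.4 invocation would presumably drop the instance-number argument and pass only the master list. This is a sketch based on the update above, reusing the same example hosts, not a verified workaround:
sbin/start-slave.sh "spark://localhost:7077,localhost:7078"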
Issue Links
- duplicates
  - SPARK-9007 start-slave.sh changed API in 1.4 and the documentation got updated to mention the old API (Resolved)
  - SPARK-6443 Support HA in standalone cluster mode (Closed)