SPARK-8941: Standalone cluster worker does not accept multiple masters on launch


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Duplicate
    • Affects Version/s: 1.4.0, 1.4.1
    • Fix Version/s: None
    • Component/s: Deploy, Documentation
    • Labels: None

    Description

      Before 1.4 it was possible to launch a worker node using a comma-separated list of master nodes.

      For example:
      sbin/start-slave.sh 1 "spark://localhost:7077,localhost:7078"
      starting org.apache.spark.deploy.worker.Worker, logging to /Users/jesper/Downloads/spark-1.4.0-bin-cdh4/sbin/../logs/spark-jesper-org.apache.spark.deploy.worker.Worker-1-Jespers-MacBook-Air.local.out
      failed to launch org.apache.spark.deploy.worker.Worker:
      Default is conf/spark-defaults.conf.
      15/07/09 12:33:06 INFO Utils: Shutdown hook called

      Spark 1.2 and 1.3.1 accept multiple masters in this format.

      Update: in 1.4, start-slave.sh expects only the master list (no instance number argument).
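
      Reading the update above that way, the 1.4-style launch would drop the leading instance number and pass the master list as the sole argument; this exact form is an inference from the update, not taken from the 1.4 documentation:

      # Assumed 1.4-style invocation: no worker instance number,
      # just the comma-separated list of master URLs.
      sbin/start-slave.sh "spark://localhost:7077,localhost:7078"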


            People

              Assignee: Unassigned
              Reporter: Jesper Lundgren (koudelka)
              Votes: 0
              Watchers: 2
