Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 1.3.0
- Fix Version/s: None
- Component/s: None
- Environment: EC2, Spark 1.3.0 cluster set up in a VPC/subnet.
Description
Steps to start a Spark cluster with the EC2 scripts:
1. Created an EC2 instance (Amazon Linux) in the VPC and subnet.
2. Downloaded spark-1.3.0.
3. chmod 400 on the key file.
4. Exported the AWS access and secret keys.
5. Ran the command:
./spark-ec2 --key-pair=deepali-ec2-keypair --identity-file=/home/ec2-user/Spark/deepali-ec2-keypair.pem --region=us-west-2 --zone=us-west-2b --vpc-id=vpc-03d67b66 --subnet-id=subnet-72fd5905 --resume launch deepali-spark-nodocker
6. The master and slave instances are created, but the script cannot ssh to them: it reports that the host could not be resolved.
7. I can ping the master and slave, and I can ssh to them from the command line, but not via the EC2 scripts.
8. I have spent more than two days on this with no luck.
9. The EC2 scripts don't work: the code references the cluster nodes by the wrong hostnames.
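The symptom in step 6 is consistent with spark_ec2.py building its ssh command from each instance's public DNS name, which is empty for instances launched into a VPC subnet without public DNS enabled. A minimal sketch of that hostname selection; the `Instance` stand-in and attribute names are illustrative (modeled on boto's EC2 instance object), not the actual spark-ec2 code:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    # Stand-in for a boto EC2 instance; only the fields relevant here.
    public_dns_name: str
    private_ip_address: str

def get_ssh_host(instance, private_ips=False):
    # Hypothetical reconstruction of the hostname selection: use the
    # public DNS name unless asked to use private IPs.
    return instance.private_ip_address if private_ips else instance.public_dns_name

# In a VPC subnet without public DNS, the public DNS name is missing.
master = Instance(public_dns_name=None, private_ip_address="10.0.0.12")

cmd = "ssh -i key.pem root@{}".format(get_ssh_host(master))
print(cmd)  # ssh -i key.pem root@None
```

The literal `None` in the host position would produce exactly the reported error, "Could not resolve hostname None".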
Console log:
./spark-ec2 --key-pair=deepali-ec2-keypair --identity-file=/home/ec2-user/Spark/deepali-ec2-keypair.pem --region=us-west-2 --zone=us-west-2b --vpc-id=vpc-03d67b66 --subnet-id=subnet-72fd5905 launch deepali-spark-nodocker
Downloading Boto from PyPi
Finished downloading Boto
Setting up security groups...
Creating security group deepali-spark-nodocker-master
Creating security group deepali-spark-nodocker-slaves
Searching for existing cluster deepali-spark-nodocker...
Spark AMI: ami-9a6e0daa
Launching instances...
Launched 1 slaves in us-west-2b, regid = r-0d2088fb
Launched master in us-west-2b, regid = r-312088c7
Waiting for AWS to propagate instance metadata...
Waiting for cluster to enter 'ssh-ready' state...........
Warning: SSH connection error. (This could be temporary.)
Host: None
SSH return code: 255
SSH output: ssh: Could not resolve hostname None: Name or service not known
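Given the "Host: None" line above, one way to make the failure actionable is to validate the hostname before shelling out, and allow an explicit fallback to the private IP. This is only a sketch of such a guard, not the actual spark-ec2 fix (later spark-ec2 versions reportedly added a `--private-ips` option for VPC clusters; verify against your version):

```python
class NoHostnameError(Exception):
    """Raised when an instance has no resolvable hostname."""

def get_dns_name(public_dns_name, private_ip_address, private_ips=False):
    # Sketch: prefer the public DNS name, but fail loudly (instead of
    # emitting the literal string "None") when it is missing, and allow
    # an explicit opt-in to private IPs for VPC-only clusters.
    host = private_ip_address if private_ips else public_dns_name
    if not host:
        raise NoHostnameError(
            "instance has no public DNS name; pass private_ips=True "
            "if the cluster runs in a VPC without public DNS")
    return host
```

With this guard, a VPC launch without public DNS fails with an explanatory error instead of looping on `ssh ... None` until the retry limit.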