Description
I launched a Spark cluster using the new EC2 scripts, stopped it with `stop`, restarted it with `start`, then ran `launch --resume` to re-deploy the Spark configuration.
It looks like the script exits after initializing Spark if it's already installed:
```
Initializing spark
~ ~/spark-ec2
Spark seems to be installed. Exiting.
Connection to ec2-*-.amazonaws.com closed.
Spark standalone cluster started at http://ec2-*.compute-1.amazonaws.com:8080
Ganglia started at http://ec2-*-1.amazonaws.com:5080/ganglia
Done!
```
The problem appears to be that the module init scripts are sourced into the top-level `setup.sh` script rather than run in subshells, so a module script's own `exit` command terminates the top-level setup script:
```sh
# Install / Init module
for module in $MODULES; do
  echo "Initializing $module"
  if [[ -e $module/init.sh ]]; then
    source $module/init.sh
  fi
done
```
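A minimal, self-contained demonstration of the difference (a sketch, not the actual spark-ec2 code — the `init.sh` here is a stand-in that mimics a module's early `exit`): wrapping the `source` in a subshell contains the `exit`, so the parent loop keeps going.

```sh
#!/usr/bin/env bash
# Create a stand-in for a module's init.sh that bails out early,
# the way spark/init.sh does when Spark is already installed.
init=$(mktemp)
printf 'echo "Spark seems to be installed. Exiting."\nexit 0\n' > "$init"

# Sourcing this directly would kill the current shell at `exit`.
# Running it in a subshell contains the exit to the subshell:
( source "$init" )

# This line is reached only because of the subshell above.
echo "setup continues"

rm -f "$init"
```

One possible fix along these lines would be to change the loop in `setup.sh` to run each `init.sh` in a subshell, e.g. `( source $module/init.sh )`, if the module scripts don't need to modify the top-level shell's environment.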