Description
If you start a cluster in a non-default region using the EC2 scripts and then try to destroy it, you get the message:
Terminating master...
Terminating slaves...
after which the script exits with no further output.
The instances are left running, and the user is never told that nothing was actually terminated.
The reason this happens is that the destroy action in spark_ec2.py calls get_existing_cluster with the die_on_error argument set to False, for no apparent reason. With die_on_error=False, a lookup that fails to find the cluster returns empty lists instead of aborting with an error, so the destroy step prints its progress messages but has nothing to terminate.
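A minimal sketch of the failure mode, using illustrative stand-ins rather than the actual spark_ec2.py code (the helper body, the region dict, and the terminate calls are assumptions for demonstration; only the get_existing_cluster name and the die_on_error flag come from the script):

{code:python}
import sys


def get_existing_cluster(region_instances, cluster_name, die_on_error=True):
    """Illustrative stand-in for the spark_ec2.py helper: look up the
    cluster's master and slave nodes among the instances visible in one
    region (here just a dict mapping group name -> list of instances)."""
    master_nodes = region_instances.get(cluster_name + "-master", [])
    slave_nodes = region_instances.get(cluster_name + "-slaves", [])
    if not master_nodes and not slave_nodes and die_on_error:
        print("ERROR: Could not find any existing cluster named " + cluster_name,
              file=sys.stderr)
        sys.exit(1)
    return master_nodes, slave_nodes


def destroy(region_instances, cluster_name):
    # As described above: with die_on_error=False, a lookup that finds
    # nothing returns empty lists instead of failing loudly ...
    master_nodes, slave_nodes = get_existing_cluster(
        region_instances, cluster_name, die_on_error=False)
    print("Terminating master...")
    for node in master_nodes:
        node.terminate()  # ... so these loops run zero times
    print("Terminating slaves...")
    for node in slave_nodes:
        node.terminate()


# The cluster was launched in a non-default region, so the default-region
# view contains no instances; destroy() prints its two messages and exits
# without finding or terminating anything.
destroy({}, "my-cluster")
{code}

With die_on_error left at True, the same failed lookup would exit with an error message instead of silently reporting success.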
I'll submit a PR for this.