While working on BIGTOP-2795, I discovered that the Zeppelin charm can easily foul up its Spark configuration. The charm was designed to be deployed by itself, with hadoop, with spark, or with hadoop+spark. There are a few problems:
- Deploy zepp by itself; the first Spark context job you run will fail. By default, the charm's Spark configuration writes the event log to HDFS. No bueno if zepp is not related to hadoop.
- Relate zepp to a standalone spark, then reconfigure that spark for yarn mode, and you'll break zepp. That's because zepp was never related to hadoop, and therefore doesn't know how to run its local spark driver in 'yarn' mode.
- Relate zepp to a standalone spark, and then remove that relation. The removal never resets the master, so you'll end up with zepp's spark driver stuck in yarn-client mode. Yuck.
As you can tell, Zeppelin really wants to be in yarn mode, but by golly, there's no real need for that. Let's handle the charm states (and state reactions) better.
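The state handling above can be sketched as a pure decision that gets recomputed on every relation change. This is a minimal illustration, not the charm's actual API: `choose_master`, `event_log_dir`, and the paths below are hypothetical names.

```python
# Hypothetical sketch: recompute the spark driver config from the
# relations that exist *right now*, instead of letting stale settings
# (like a leftover yarn-client master) linger after a relation goes away.

def choose_master(spark_url=None, hadoop_ready=False):
    """Pick a Spark master that Zeppelin's local driver can actually use.

    spark_url:    master URL advertised by a related standalone spark
                  (None when there is no spark relation).
    hadoop_ready: True only when a hadoop relation is present.
    """
    if spark_url and spark_url.startswith('yarn') and not hadoop_ready:
        # A related spark flipped to yarn mode, but we were never related
        # to hadoop: fall back to a local driver instead of breaking.
        return 'local[*]'
    if spark_url:
        return spark_url
    if hadoop_ready:
        return 'yarn-client'
    # No relations at all (or the spark relation went away): local driver,
    # so nothing is left stuck in yarn-client mode.
    return 'local[*]'


def event_log_dir(hadoop_ready=False):
    """Only point the event log at HDFS when HDFS can actually exist."""
    # Illustrative placeholder paths, not the charm's real defaults.
    return 'hdfs:///var/log/spark' if hadoop_ready else 'file:///var/log/spark'
```

Because the config is derived only from current relations, each of the three failure modes above resolves itself: a solo zepp logs locally, a yarn-flipped spark relation degrades to a local driver, and removing the spark relation drops the driver back to `local[*]`.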