The Spark charm can be reconfigured at runtime to work in local, standalone, standalone-HA (with zookeeper), or yarn modes. In any of these modes, the user can run jobs such as SparkPi to verify functionality. The current charm has a few problems with this:
- transitioning out of HA mode does not reset the zk connection string correctly
- transitioning into HA mode before the zk ensemble has settled can cause the spark master to fetch invalid data from zk; ensure zk has settled before starting the spark master
- master/worker services are always started; start only the services relevant to the execution mode
- spark-examples.jar location has changed in spark2
- the add-on benchmark suite (SparkBench from IBM) is not compatible with Spark 2.1; include a PageRank benchmark that works across 1.5 and 2.1
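The first two service-related fixes amount to making service startup and the zk connection string functions of the configured mode. A minimal sketch of that idea, where the function names, service names, and mode-to-service mapping are all illustrative assumptions rather than the charm's actual API:

```python
# Illustrative sketch only: these names and the mode-to-service mapping are
# assumptions, not the charm's real layer code.
SERVICES_BY_MODE = {
    'local': [],                                      # driver runs in-process
    'standalone': ['spark-master', 'spark-worker'],
    'standalone-ha': ['spark-master', 'spark-worker'],
    'yarn': [],                                       # YARN manages executors
}

def services_for_mode(mode):
    """Return only the daemons relevant to the configured execution mode."""
    return SERVICES_BY_MODE[mode]

def zk_url_for_mode(mode, zk_units):
    """Build the zk connection string in HA mode; reset it in any other mode."""
    if mode != 'standalone-ha':
        return ''  # transitioning out of HA must clear the stale string
    return ','.join('{}:{}'.format(host, port) for host, port in zk_units)
```

Deriving both values from the mode in one place, rather than mutating them on each transition, also makes the reactive logic easier to follow when the topology changes.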
To ease future maintenance, the spark charm needs a thorough refactoring to simplify the reactive logic; it is currently too hard to follow what happens when the deployment topology changes.
 - this may belong in the spark puppet recipe, but it is easy enough to fix in the charm while investigating a better place.
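On the benchmark point: a PageRank workload is a reasonable candidate for spanning 1.5 and 2.1 because its core is plain power iteration, independent of any Spark API changes. A minimal pure-Python sketch of that computation (illustrative only; the real benchmark would distribute this over Spark):

```python
def pagerank(links, iterations=20, d=0.85):
    """PageRank by power iteration.

    links: dict mapping each node to its list of outgoing neighbours.
    Every node must appear as a key and have at least one outgoing link;
    handling dangling nodes is omitted for brevity.
    """
    n = len(links)
    ranks = {node: 1.0 / n for node in links}
    for _ in range(iterations):
        contribs = {node: 0.0 for node in links}
        for node, out in links.items():
            share = ranks[node] / len(out)  # split rank over outgoing links
            for dest in out:
                contribs[dest] += share
        ranks = {node: (1 - d) / n + d * contribs[node] for node in links}
    return ranks

# Example: a 3-node cycle converges to equal ranks of 1/3 each.
ranks = pagerank({'a': ['b'], 'b': ['c'], 'c': ['a']})
```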