Zeppelin assigns the Scala interpreter as the default for Spark notebooks. This default creates an extra step if you want to write code in PySpark: you have to put %pyspark at the beginning of each notebook paragraph so that Zeppelin knows it is PySpark code. As described here (via the zeppelin.interpreters property).
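For illustration, under the Scala default a Python paragraph has to start with the interpreter directive (sc here is the SparkContext that Zeppelin injects into the session):

```
%pyspark
# Without this first line, Zeppelin treats the paragraph as Scala code.
print(sc.version)
```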
However, switching the order of org.apache.zeppelin.spark.SparkInterpreter and org.apache.zeppelin.spark.PySparkInterpreter doesn't change the default interpreter in the Zeppelin notebook. In short, I can't use the PySpark interpreter without the %pyspark annotation, even after updating zeppelin-site.xml and restarting the Zeppelin server.
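For reference, this is roughly the change attempted in conf/zeppelin-site.xml: listing PySparkInterpreter first in the zeppelin.interpreters value, which is supposed to make the first entry the default (the full default list contains more classes; only the two relevant ones are shown here):

```xml
<property>
  <name>zeppelin.interpreters</name>
  <!-- PySparkInterpreter moved ahead of SparkInterpreter;
       remaining interpreter classes from the default list omitted for brevity -->
  <value>org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkInterpreter</value>
</property>
```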
This issue was originally reported here.