Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Goal
Make Zeppelin run in a Kubernetes environment.
- Run the Zeppelin daemon as a Deployment, with RBAC to create/delete Pods for interpreters
- Run standard interpreters as Pods
- Run the Spark interpreter with a Spark cluster deployed inside the Kubernetes cluster
How it works
- The Zeppelin daemon is deployed in Kubernetes with the necessary Role (RBAC), e.g. kubectl apply -f ${ZEPPELIN_HOME}/k8s/zeppelin.yaml
- The Zeppelin daemon automatically configures itself to use K8sStandardInterpreterLauncher and K8sSparkInterpreterLauncher instead of StandardInterpreterLauncher and SparkInterpreterLauncher.
- K8sStandardInterpreterLauncher runs an interpreter as a Pod.
- K8sSparkInterpreterLauncher runs the Spark interpreter with a Spark cluster inside the Kubernetes cluster.
So users can start using Zeppelin on Kubernetes with zero configuration.
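The Role the daemon needs could look roughly like the following sketch. This is not the actual spec shipped in zeppelin.yaml; the name, namespace, and resource list are assumptions, kept to the minimum the description implies (creating and deleting interpreter Pods):

```yaml
# Hypothetical RBAC sketch for the Zeppelin daemon (names/namespace assumed,
# not taken from the real zeppelin.yaml).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: zeppelin-server-role     # assumed name
  namespace: default             # assumed namespace
rules:
- apiGroups: [""]
  resources: ["pods"]            # interpreter Pods created/deleted by the launcher
  verbs: ["create", "get", "list", "watch", "delete"]
```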
Customize the interpreter pod
Users can easily modify and extend zeppelin.yaml to suit their needs (for example, mounting a volume to persist configuration and notebooks). To provide the same customization capability for interpreter pods, Zeppelin stores interpreter pod spec (yaml) files in the directory "${ZEPPELIN_HOME}/k8s/interpreter/" and uses all yaml files in that directory, so users can modify the pod spec files or add more.
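As an illustration of that kind of customization, a pod spec in that directory might be extended with a persistent volume for notebooks. This is a hypothetical fragment, not a file from the Zeppelin source tree; the image, mount path, and claim name are all assumptions:

```yaml
# Hypothetical interpreter pod spec under ${ZEPPELIN_HOME}/k8s/interpreter/,
# extended with a PersistentVolumeClaim to keep notebooks across restarts.
# All names below are assumptions for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: zeppelin-interpreter     # assumed name
spec:
  containers:
  - name: interpreter
    image: apache/zeppelin:latest          # assumed image
    volumeMounts:
    - name: notebook
      mountPath: /zeppelin/notebook        # assumed notebook path
  volumes:
  - name: notebook
    persistentVolumeClaim:
      claimName: zeppelin-notebook-pvc     # assumed PVC name
```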
Spark interpreter in Kubernetes
The Spark interpreter not only runs itself in Kubernetes as a Pod, but also creates a Spark cluster. spark-submit can deploy a Spark cluster in Kubernetes as well; see https://spark.apache.org/docs/2.3.0/running-on-kubernetes.html. There is also a PR we can check: https://github.com/apache/zeppelin/pull/2637.
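For reference, the Spark 2.3.0 documentation linked above submits an application to a Kubernetes cluster along these lines (the API server address and container image are placeholders to fill in; this is a sketch of the upstream Spark mechanism, not Zeppelin's launcher code):

```shell
# Sketch of native Spark-on-Kubernetes submission, per the Spark 2.3.0 docs.
# Replace the apiserver host/port and container image with your own values.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
```

K8sSparkInterpreterLauncher builds on the same idea: the driver and executors run as Pods inside the same Kubernetes cluster as Zeppelin itself.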
Attachments
Issue Links
- relates to ZEPPELIN-3954 Use "kubectl create -f" instead of "kubectl apply -f" (Open)