Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Fix Version: 0.9.0
- Labels: None
Description
When Zeppelin runs in Kubernetes mode, it creates each interpreter pod from "k8s/interpreter/100-interpreter-spec.yaml". Unfortunately, this template currently only allows the interpreter pod to request CPU and memory resources. Users who rely on deep learning libraries (e.g., TensorFlow) need the interpreter pod to be scheduled onto a node with GPU resources.
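As a sketch of what the desired behavior looks like, a GPU request can be expressed in a pod spec via a Kubernetes extended resource. The container name, image, and resource values below are illustrative, and `nvidia.com/gpu` assumes the NVIDIA device plugin is installed on the cluster:

```yaml
# Illustrative interpreter pod fragment; names and values are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: interpreter-example
spec:
  containers:
  - name: interpreter
    image: apache/zeppelin:0.9.0
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
        # Extended resource; Kubernetes requires GPUs to be declared
        # under limits (requests, if given, must equal limits).
        nvidia.com/gpu: 1
```

Supporting this would mean letting users inject such a resource entry into the interpreter spec template rather than hard-coding only CPU and memory.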