Zeppelin / ZEPPELIN-5443

Allow the interpreter pod to request the gpu resources under k8s mode


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.9.0
    • Fix Version/s: 0.9.1, 0.10.0
    • Component/s: Kubernetes
    • Labels: None

    Description

      When Zeppelin runs in K8s mode, it creates the interpreter pod from the template "k8s/interpreter/100-interpreter-spec.yaml". Unfortunately, this template currently only lets the interpreter pod request CPU and memory resources. Users who rely on deep learning libraries (e.g., TensorFlow) need the interpreter pod to be schedulable onto a node with GPU resources.
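      A minimal sketch of what a GPU request in the interpreter pod spec could look like. In Kubernetes, GPUs are exposed as extended resources (e.g., nvidia.com/gpu via the NVIDIA device plugin) and must be requested under "limits", since they cannot be overcommitted. The mustache-style placeholders below are illustrative of the template variables used in Zeppelin's interpreter spec, not the exact property names the fix introduced:

      ```yaml
      # Hypothetical fragment of k8s/interpreter/100-interpreter-spec.yaml.
      # Adding a GPU limit lets the scheduler place the interpreter pod on a
      # node that actually exposes GPU devices.
      kind: Pod
      apiVersion: v1
      spec:
        containers:
          - name: {{zeppelin.k8s.interpreter.container.name}}
            image: {{zeppelin.k8s.interpreter.container.image}}
            resources:
              requests:
                cpu: "1"
                memory: "1Gi"
              limits:
                memory: "1Gi"
                nvidia.com/gpu: 1   # extended resource; must appear under limits
      ```

      With such a limit in place, a pod is only scheduled onto nodes whose device plugin advertises the nvidia.com/gpu resource, which is exactly the scheduling behavior the issue asks for.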


          People

            Assignee: rickcheng
            Reporter: rickcheng
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:

              Time Tracking

                Estimated: Not Specified
                Remaining: 0h
                Logged: 40m