Zeppelin / ZEPPELIN-3205

Restarting an interpreter setting in a notebook aborts running jobs of other notebooks


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.7.3
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      I'm aware that there is a resolved issue:

      https://issues.apache.org/jira/browse/ZEPPELIN-1770 

      But it's pretty simple to reproduce: configure the Spark or Python interpreter in per-note isolated mode and start a long-running job in two notebooks, #1 and #2. If I then restart the Spark or Python interpreter (depending on the type of running job) from notebook #1, the job in notebook #2 is aborted. It is worse for PySpark: not only is the job aborted, the PySpark Python process of notebook #2 is also killed, and notebook #2 hangs afterward; the only way to recover is to restart notebook #2.
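A minimal long-running paragraph of the kind described above might look like the following sketch (the function name, step count, and delay are illustrative assumptions, not from the report). Running a paragraph like this in both notebooks under a per-note isolated `%python` or `%pyspark` interpreter, then restarting the interpreter from notebook #1, is the reproduction path described in this issue.

```python
# Hypothetical stand-in for the long-running job described above;
# names and timings are illustrative, not taken from the report.
import time

def long_running_job(steps=5, delay=0.1):
    """Simulate a job that runs long enough to observe an interpreter restart."""
    completed = []
    for step in range(steps):
        time.sleep(delay)  # simulate one unit of work
        completed.append(step)
    return completed

if __name__ == "__main__":
    print(long_running_job())
```

In a real reproduction the job would run for minutes (e.g. a large `steps` value or a Spark action), so that the restart in notebook #1 can be triggered while notebook #2 is still mid-run.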

      I also found a related issue for the Python interpreter:

      https://issues.apache.org/jira/browse/ZEPPELIN-3171


    People

      Assignee: Unassigned
      Reporter: dungnguyen (ntdunglc)
      Votes: 0
      Watchers: 1
