Calling `quit()` in the new IPython interpreter backend causes the IPython backend process to exit, after which any new paragraph run gets stuck in the 'RUNNING' state indefinitely, or at least until the PySpark interpreter is restarted.
- Ignore `quit()` calls
- More importantly, capture when the IPython backend process dies (for this or any other reason) so the Spark interpreter knows it has to start a new session, and so it does not show a misleading 'RUNNING' state to users indefinitely on the front-end.
The first issue might be easy to fix with something like `def quit(): pass` injected as soon as the IPython process starts.
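A minimal sketch of that idea, assuming we can run a startup snippet inside the IPython kernel process (the `_ignore_exit` name is illustrative, not an existing Zeppelin hook):

```python
import builtins

def _ignore_exit(*args, **kwargs):
    # Silently ignore quit()/exit() instead of terminating the kernel.
    pass

# Shadow the built-in helpers so user code cannot kill the backend.
builtins.quit = _ignore_exit
builtins.exit = _ignore_exit
```

This only covers the explicit `quit()`/`exit()` helpers; `sys.exit()` or a hard crash would still take the process down, which is why the second point matters more.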
But again, more important here is some generic logic to detect when the IPython process exits or dies for any reason and to pass that information up to the Spark interpreter.
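One generic way to do this, sketched in Python under the assumption that the interpreter side holds a handle to the child process (the `watch_process` helper and `on_exit` callback are hypothetical names, not existing Zeppelin APIs):

```python
import subprocess
import sys
import threading

def watch_process(proc, on_exit):
    """Start a daemon thread that blocks until `proc` terminates,
    then reports its exit code via `on_exit`."""
    def _waiter():
        code = proc.wait()  # returns as soon as the child exits, for any reason
        on_exit(code)       # notify the interpreter layer: session is gone
    t = threading.Thread(target=_waiter, daemon=True)
    t.start()
    return t

if __name__ == "__main__":
    # Usage: spawn a short-lived child standing in for the IPython backend.
    proc = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(3)"])
    done = threading.Event()
    codes = []
    watch_process(proc, lambda c: (codes.append(c), done.set()))
    done.wait(timeout=10)
    print(codes)  # [3]
```

On the callback, the interpreter could mark the session dead, fail the stuck paragraph instead of leaving it 'RUNNING', and lazily start a fresh IPython process on the next run.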