Description
Right now, if an exception is raised inside a PySpark worker, the worker process exits prematurely, triggering an EOF exception on the JVM side. This means you have to dig through the worker logs to find the actual exception trace.
It would be more helpful if the Python worker instead caught the exception and passed its string representation to the JVM, which could then wrap it in a Java exception and rethrow it, e.g.
throw new PythonException(exnString).
This would make it much easier to debug Python tasks, since the exception string would show up at the driver.
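To make the idea concrete, here is a minimal sketch of what the worker-side half could look like. Everything in it is illustrative rather than a description of the current code: the marker value, the framing, and the process_func / write_exception names are hypothetical and not part of the existing worker protocol.

{code:python}
import struct
import traceback

# Hypothetical sentinel: a negative frame length tells the JVM reader that an
# exception trace follows instead of normal task output.
EXCEPTION_MARKER = -1

def write_exception(outfile):
    """Frame the current exception's traceback so the JVM can rethrow it."""
    msg = traceback.format_exc().encode("utf-8")
    outfile.write(struct.pack(">i", EXCEPTION_MARKER))  # signal an error frame
    outfile.write(struct.pack(">i", len(msg)))          # length-prefixed payload
    outfile.write(msg)
    outfile.flush()

def run_task(process_func, infile, outfile):
    """Run one task; on failure, send the trace instead of exiting abruptly."""
    try:
        process_func(infile, outfile)
    except Exception:
        # Instead of letting the process die (which the JVM only sees as EOF),
        # forward the stringified trace for the JVM to wrap in a PythonException.
        write_exception(outfile)
{code}

On the JVM side, a reader that sees the marker would consume the length-prefixed string and throw new PythonException(exnString), so the trace propagates to the driver instead of being buried in the worker logs.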