  Spark / SPARK-33407

Simplify the exception message from Python UDFs


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.1.0
    • Fix Version/s: 3.1.0
    • Component/s: PySpark
    • Labels: None

    Description

      Currently, the exception message is as below:

      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File "/.../python/pyspark/sql/dataframe.py", line 427, in show
          print(self._jdf.showString(n, 20, vertical))
        File "/.../python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
        File "/.../python/pyspark/sql/utils.py", line 127, in deco
          raise_from(converted)
        File "<string>", line 3, in raise_from
      pyspark.sql.utils.PythonException:
        An exception was thrown from Python worker in the executor:
      Traceback (most recent call last):
        File "/.../python/lib/pyspark.zip/pyspark/worker.py", line 605, in main
          process()
        File "/.../python/lib/pyspark.zip/pyspark/worker.py", line 597, in process
          serializer.dump_stream(out_iter, outfile)
        File "/.../python/lib/pyspark.zip/pyspark/serializers.py", line 223, in dump_stream
          self.serializer.dump_stream(self._batched(iterator), stream)
        File "/.../python/lib/pyspark.zip/pyspark/serializers.py", line 141, in dump_stream
          for obj in iterator:
        File "/.../python/lib/pyspark.zip/pyspark/serializers.py", line 212, in _batched
          for item in iterator:
        File "/.../python/lib/pyspark.zip/pyspark/worker.py", line 450, in mapper
          result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
        File "/.../python/lib/pyspark.zip/pyspark/worker.py", line 450, in <genexpr>
          result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
        File "/.../python/lib/pyspark.zip/pyspark/worker.py", line 90, in <lambda>
          return lambda *a: f(*a)
        File "/.../python/lib/pyspark.zip/pyspark/util.py", line 107, in wrapper
          return f(*args, **kwargs)
        File "<stdin>", line 3, in divide_by_zero
      ZeroDivisionError: division by zero
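
      For reference, a traceback like the one above can be produced with a UDF
      such as the following (a minimal sketch; the ticket itself does not
      include the snippet, and the original ran in a REPL, so exact paths and
      line numbers differ):

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import udf

      spark = SparkSession.builder.getOrCreate()

      @udf("double")
      def divide_by_zero(v):
          return v / 0  # raises ZeroDivisionError on the executor

      # Triggering the UDF surfaces the long traceback shown above.
      spark.range(1).select(divide_by_zero("id")).show()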
      

      Actually, in almost all cases, users only care about {{ZeroDivisionError: division by zero}}. We don't really have to show the internal frames in 99% of cases.

      We could just shorten it, for example:

      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File "/.../python/pyspark/sql/dataframe.py", line 427, in show
          print(self._jdf.showString(n, 20, vertical))
        File "/.../python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
        File "/.../python/pyspark/sql/utils.py", line 127, in deco
          raise_from(converted)
        File "<string>", line 3, in raise_from
      pyspark.sql.utils.PythonException:
        An exception was thrown from Python worker in the executor:
      Traceback (most recent call last):
        File "<stdin>", line 3, in divide_by_zero
      ZeroDivisionError: division by zero
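
      One way to get there (a sketch only, assuming the simplification is done
      by filtering out frames that originate from PySpark-internal files; the
      actual fix may take a different approach) is to rebuild the message from
      only the user's own frames:

      import traceback

      def simplify_traceback(tb):
          # Keep only frames whose file path does not point into PySpark
          # internals (worker.py, serializers.py, util.py, ...). Note this
          # substring match is a heuristic and would also drop a user file
          # that happens to have "pyspark" in its path.
          return [f for f in traceback.extract_tb(tb)
                  if "pyspark" not in f.filename]

      def format_user_exception(exc):
          lines = ["Traceback (most recent call last):\n"]
          lines += traceback.format_list(simplify_traceback(exc.__traceback__))
          lines += traceback.format_exception_only(type(exc), exc)
          return "".join(lines)

      The worker could then send format_user_exception(e) back to the JVM
      instead of the full traceback, yielding output like the shortened
      message above.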
      


          People

            Assignee: Hyukjin Kwon (gurwls223)
            Reporter: Hyukjin Kwon (gurwls223)
