Affects Version/s: 2.3.1, 2.3.2, 2.4.0
Fix Version/s: None
Tested both on Linux and Windows, on my computer and on Databricks.
Spark version: 2.3.1
Python version: 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18)
I tried different releases of Spark too (2.4.0, 2.3.2), the behaviour persists.
The physical plan produced by the .explain() method shows an inefficient way of calling Python UDFs in PySpark.
This behaviour takes place under these circumstances:
- PySpark API
- At least one operation in the DAG that uses the result of the Python UDF
My expectation is that the optimizer should call the Python UDF once with BatchEvalPython and then reuse the result across the following steps.
Instead, the optimizer calls the same UDF n times, with the same parameters, within the same BatchEvalPython, and only uses one of the result columns (PythonUDF2#16) while discarding the others.
I believe this could lead to poor performance, due both to the large data exchange with the Python processes and to the additional calls.
Full code on Gist: https://gist.github.com/andrearota/f77b6a293421a3f26dd5d2fb0a04046e
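A minimal sketch of the kind of code that triggers this (column and function names here are illustrative, not taken from the gist; the gist above has the actual reproduction):

```python
# Sketch: a Python UDF whose result is consumed by a later operation,
# which is the condition under which the duplicated BatchEvalPython
# calls appear in the physical plan.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import LongType

spark = SparkSession.builder.master("local[1]").appName("udf-repro").getOrCreate()

@F.udf(returnType=LongType())
def expensive_udf(x):  # hypothetical UDF name
    return x * 2

df = spark.range(10)

# A later column depends on the UDF output, so the DAG reuses it.
df2 = (df.withColumn("doubled", expensive_udf("id"))
         .withColumn("plus_one", F.col("doubled") + 1))

# Inspect the physical plan: BatchEvalPython lists the same UDF
# (same arguments) multiple times, but only one output column is used.
df2.explain()
```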