Description
Prior to Spark 2.0 it was possible to use the output of a Python UDF as a join predicate:
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

df1 = sc.parallelize([(1, ), (2, )]).toDF(["col_a"])
df2 = sc.parallelize([(2, ), (3, )]).toDF(["col_b"])

pred = udf(lambda x, y: x == y, BooleanType())

df1.join(df2).where(pred("col_a", "col_b")).show()
In Spark 2.0 this is no longer possible, even with cross joins explicitly enabled:
spark.conf.set("spark.sql.crossJoin.enabled", True)

df1.join(df2).where(pred("col_a", "col_b")).show()

## ...
## Py4JJavaError: An error occurred while calling o731.showString.
## : java.lang.RuntimeException: Invalid PythonUDF <lambda>(col_a#132L, col_b#135L),
##   requires attributes from more than one child.
## ...
Issue Links
- duplicates SPARK-18589: persist() resolves "java.lang.RuntimeException: Invalid PythonUDF <lambda>(...), requires attributes from more than one child" (Resolved)