Description
test("SPARK-28344: fail ambiguous self join - Dataset.colRegex as column ref") { val df1 = spark.range(3) val df2 = df1.filter($"id" > 0) withSQLConf( SQLConf.FAIL_AMBIGUOUS_SELF_JOIN_ENABLED.key -> "true", SQLConf.CROSS_JOINS_ENABLED.key -> "true") { assertAmbiguousSelfJoin(df1.join(df2, df1.colRegex("id") > df2.colRegex("id"))) } }
For this unit test, if we append `.toDF()` to both `df1` and `df2`, the query no longer fails: the ambiguous self join is not detected. A minimal sketch of that variant is shown below.
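The following sketch illustrates the reported behavior; it assumes the same test harness as the quoted test (`withSQLConf`, `SQLConf`, and `spark.implicits._` from the Spark SQL test suite), and the trailing `.collect()` is only added here to force analysis and execution.

```scala
// Sketch only: same setup as the quoted test, but with .toDF() appended to
// both sides of the self join.
val df1 = spark.range(3).toDF()
val df2 = df1.filter($"id" > 0).toDF()

withSQLConf(
  SQLConf.FAIL_AMBIGUOUS_SELF_JOIN_ENABLED.key -> "true",
  SQLConf.CROSS_JOINS_ENABLED.key -> "true") {
  // Expected: the ambiguous self join check raises an AnalysisException.
  // Observed (per this report): the join resolves and the query succeeds.
  df1.join(df2, df1.colRegex("id") > df2.colRegex("id")).collect()
}
```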
Issue Links
- relates to SPARK-33536: Incorrect join results when joining twice with the same DF (Resolved)