Details
Type: Improvement
Priority: Major
Status: Resolved
Resolution: Incomplete
Affects Version: 2.0.1
Fix Version: None
Description
Example code:
val df = spark.sparkContext.parallelize(List(
    ("A", 10, "dss@s1"), ("A", 20, "dss@s2"),
    ("B", 1, "dss@qqa"), ("B", 2, "dss@qqb")
  )).toDF("Group", "Amount", "Email")

df.as("a").join(df.as("b"))
  .where($"a.Group" === $"b.Group")
  .explain()
I always get the SortMergeJoin strategy (never a shuffle hash join), even if I set spark.sql.join.preferSortMergeJoin to false, because the statistics for an RDD-backed plan default to:
sizeInBytes = 2^63 - 1
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala#L101
and thus the condition here:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala#L127
is always false.
I don't think this should be the case: my DataFrame has a specific size and number of partitions (200, which is by the way far from optimal).
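To illustrate why the planner never picks the shuffle hash join here, below is a minimal, self-contained sketch of the size check that SparkStrategies applies before building a local hash map. The object name, the constants, and the standalone BigInt arithmetic are assumptions for illustration; only the inequality mirrors the linked condition (plan size must be smaller than autoBroadcastJoinThreshold * numShufflePartitions). With the RDD-backed default of sizeInBytes = Long.MaxValue, the check can never pass:

```scala
object ShuffleHashJoinCheck {
  // Hypothetical stand-ins for the SQL conf defaults (assumption):
  // spark.sql.autoBroadcastJoinThreshold = 10 MB, spark.sql.shuffle.partitions = 200
  val autoBroadcastJoinThreshold: BigInt = BigInt(10L * 1024 * 1024)
  val numShufflePartitions: BigInt = BigInt(200)

  // Sketch of the "can build a local hash map" size test from SparkStrategies:
  // the per-partition hash map is only considered when the plan's estimated
  // size is below threshold * partitions.
  def canBuildLocalHashMap(sizeInBytes: BigInt): Boolean =
    sizeInBytes < autoBroadcastJoinThreshold * numShufflePartitions

  def main(args: Array[String]): Unit = {
    // sizeInBytes reported for an RDD-backed plan (ExistingRDD): 2^63 - 1
    val defaultRddSize = BigInt(Long.MaxValue)
    println(canBuildLocalHashMap(defaultRddSize))      // always false -> SortMergeJoin
    println(canBuildLocalHashMap(BigInt(1024 * 1024))) // true for a realistic 1 MB estimate
  }
}
```

So regardless of preferSortMergeJoin, the defaulted statistic alone rules the shuffle hash join out; a realistic size estimate would let the check succeed.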