Spark / SPARK-17959

spark.sql.join.preferSortMergeJoin has no effect for simple join due to calculated size of LogicalRdd


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 2.0.1
    • Fix Version/s: None
    • Component/s: SQL

    Description

      Example code (with the implicits import needed for toDF and the $ column syntax):

      import spark.implicits._

      val df = spark.sparkContext.parallelize(List(
        ("A", 10, "dss@s1"), ("A", 20, "dss@s2"),
        ("B", 1, "dss@qqa"), ("B", 2, "dss@qqb"))).toDF("Group", "Amount", "Email")

      df.as("a").join(df.as("b"))
        .where($"a.Group" === $"b.Group")
        .explain()

      I always get the sort-merge join strategy (never shuffle hash join), even if I set spark.sql.join.preferSortMergeJoin to false, because the LogicalRDD's statistics report:
      sizeInBytes = 2^63 - 1
      https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala#L101
      and thus the condition here:
      https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala#L127
      is always false.
      I think this shouldn't be the case: my DataFrame has a specific size and number of partitions (200, which is by the way far from optimal).
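      To make the failure mode concrete, here is a minimal sketch of the planner's size check, assuming a simplified form of JoinSelection's canBuildLocalHashMap test in SparkStrategies.scala (the function name and the default threshold of 10 MB and 200 shuffle partitions are taken from Spark's defaults; the exact internal signature differs):

      ```scala
      // Simplified sketch of the shuffle-hash-join eligibility check:
      // a side can be hashed locally only if its estimated size is small
      // enough relative to the broadcast threshold and partition count.
      def canBuildLocalHashMap(sizeInBytes: BigInt,
                               autoBroadcastJoinThreshold: Long,
                               numShufflePartitions: Int): Boolean =
        sizeInBytes < BigInt(autoBroadcastJoinThreshold) * numShufflePartitions

      // LogicalRDD reports the default statistics size, Long.MaxValue (2^63 - 1),
      // so the check fails no matter what preferSortMergeJoin is set to:
      val defaultSize = BigInt(Long.MaxValue)
      val eligible = canBuildLocalHashMap(defaultSize, 10L * 1024 * 1024, 200)
      // eligible is false, so the planner falls back to sort-merge join
      ```

      With any realistic estimate (a few kilobytes for the four-row DataFrame above), the same check would pass, which is why the hard-coded 2^63 - 1 estimate effectively disables shuffle hash join for RDD-backed plans.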


          People

            Assignee: Unassigned
            Reporter: Stavros Kontopoulos (skonto)
