SPARK-8682: Range Join for Spark SQL

Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: SQL

Description

      Currently, Spark SQL uses a Broadcast Nested Loop Join (or a filtered Cartesian Join) when it has to execute the following range query:

      SELECT A.*,
             B.*
      FROM   tableA A
             JOIN tableB B
              ON A.start <= B.end
               AND A.end > B.start
      

      This is horribly inefficient. The performance of this query can be greatly improved, when one of the tables can be broadcast, by creating a range index. A range index is basically a sorted map containing the rows of the smaller table, indexed by both the high and low keys. Using this structure, the complexity of the query drops from O(N * M) to O(N * 2 * log(M)), i.e. O(N * log(M)), where N is the number of records in the larger table and M is the number of records in the smaller (indexed) table.
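
      As a rough illustration, here is a minimal sketch of such a range index in Scala. This is not the code from the pull request, and the names (IndexedRow, RangeIndex, probe) are hypothetical: the broadcast side is sorted on its low key, each probe runs one binary search to bound the candidates on that key, and the high-key half of the join condition is then checked on those candidates.

      // Simplified range-index sketch; illustrative only, not the PR code.
      case class IndexedRow(start: Long, end: Long /*, other columns */)

      class RangeIndex(rows: Array[IndexedRow]) {
        // Broadcast-side rows sorted on the low key.
        private val byStart: Array[IndexedRow] = rows.sortBy(_.start)

        // Returns indexed rows r overlapping the probe interval, i.e.
        // pStart <= r.end && pEnd > r.start (the join condition above).
        def probe(pStart: Long, pEnd: Long): Iterator[IndexedRow] = {
          // Binary search for the first position whose start key is >= pEnd.
          var lo = 0
          var hi = byStart.length
          while (lo < hi) {
            val mid = (lo + hi) >>> 1
            if (byStart(mid).start < pEnd) lo = mid + 1 else hi = mid
          }
          // Every row left of `lo` already satisfies r.start < pEnd; only the
          // high-key condition remains. A second sorted view on the high key
          // (the "both keys" part above) can prune this residual scan further.
          byStart.iterator.take(lo).filter(_.end >= pStart)
        }
      }

      // Usage: build the index once from the broadcast table, then probe it
      // once per row streamed from the larger table.
      object RangeIndexDemo extends App {
        val index = new RangeIndex(Array(
          IndexedRow(0L, 10L), IndexedRow(5L, 15L), IndexedRow(20L, 30L)))
        index.probe(8L, 18L).foreach(println) // prints the first two rows
      }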

      I have created a pull request for this. According to the Spark SQL: Relational Data Processing in Spark paper (section 7.2, page 11), similar work has already been done by the ADAM project (though I cannot locate the code).

      Any comments and/or feedback are greatly appreciated.

Attachments

    • perf_testing.scala (2 kB, Herman van Hövell)

People

    Assignee: Unassigned
    Reporter: Herman van Hövell (hvanhovell)
    Michael Armbrust
    Votes: 11
    Watchers: 26
