Spark / SPARK-18642

Spark SQL: Catalyst is scanning undesired columns


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.6.2, 1.6.3
    • Fix Version/s: 2.0.0
    • Component/s: SQL
    • Labels:
    • Environment:

      Ubuntu 14.04
      Spark: Local Mode

      Description

      When doing a left join between two tables, say A and B, Catalyst has information about the projection required for table B, so only the required columns should be scanned. Instead, the physical plan shows every column of B being read.

      Code snippet below explains the scenario:

      scala> val dfA = sqlContext.read.parquet("/home/mohit/ruleA")
      dfA: org.apache.spark.sql.DataFrame = [aid: int, aVal: string]

      scala> val dfB = sqlContext.read.parquet("/home/mohit/ruleB")
      dfB: org.apache.spark.sql.DataFrame = [bid: int, bVal: string]

      scala> dfA.registerTempTable("A")
      scala> dfB.registerTempTable("B")

      scala> sqlContext.sql("select A.aid, B.bid from A left join B on A.aid=B.bid where B.bid<2").explain

      == Physical Plan ==
      Project [aid#15,bid#17]
      +- Filter (bid#17 < 2)
         +- BroadcastHashOuterJoin [aid#15], [bid#17], LeftOuter, None
            :- Scan ParquetRelation[aid#15,aVal#16] InputPaths: file:/home/mohit/ruleA
            +- Scan ParquetRelation[bid#17,bVal#18] InputPaths: file:/home/mohit/ruleB

      This is a watered-down example of a production issue that has a huge performance impact.
      External reference: http://stackoverflow.com/questions/40783675/spark-sql-catalyst-is-scanning-undesired-columns
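
      Until the fix lands (2.0.0), a possible workaround, sketched here against the same dfA/dfB defined above (assumes a running spark-shell with a SQLContext), is to project only the needed columns before the join, so the Parquet scan of B is narrowed to bid explicitly rather than relying on optimizer pruning:

      ```scala
      // Project the required columns *before* joining, so each Parquet scan
      // only reads the columns named in the projection.
      val a = dfA.select("aid")
      val b = dfB.select("bid")

      val joined = a.join(b, a("aid") === b("bid"), "left_outer")
                    .filter(b("bid") < 2)

      // The scan of ruleB should now list only bid in its output columns.
      joined.explain()
      ```

      This only sidesteps the missing column pruning; the filter and join semantics are unchanged from the SQL query above.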


            People

            • Assignee: Dongjoon Hyun (dongjoon)
            • Reporter: Mohit (mohitgargk)
            • Votes: 1
            • Watchers: 3
