Details
Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Versions: 1.6.2, 1.6.3
Environment: Ubuntu 14.04, Spark: Local Mode
Description
When doing a left join between two tables, say A and B, Catalyst has the projection information for table B, so only the required columns of B should be scanned. Instead, all of B's columns are read.
The code snippet below demonstrates the scenario:
scala> val dfA = sqlContext.read.parquet("/home/mohit/ruleA")
dfA: org.apache.spark.sql.DataFrame = [aid: int, aVal: string]
scala> val dfB = sqlContext.read.parquet("/home/mohit/ruleB")
dfB: org.apache.spark.sql.DataFrame = [bid: int, bVal: string]
scala> dfA.registerTempTable("A")
scala> dfB.registerTempTable("B")
scala> sqlContext.sql("select A.aid, B.bid from A left join B on A.aid=B.bid where B.bid<2").explain
== Physical Plan ==
Project [aid#15,bid#17]
+- Filter (bid#17 < 2)
   +- BroadcastHashOuterJoin [aid#15], [bid#17], LeftOuter, None
      :- Scan ParquetRelation[aid#15,aVal#16] InputPaths: file:/home/mohit/ruleA
      +- Scan ParquetRelation[bid#17,bVal#18] InputPaths: file:/home/mohit/ruleB
Note that the scan of ruleB reads both bid#17 and bVal#18 even though only bid is needed by the query. This is a watered-down example of a production issue that has a huge performance impact.
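A possible workaround, sketched below using the same dfA and dfB as above (the variable names prunedB and joined are illustrative, not from the original report): projecting the required column from B before the join forces the pruning manually, so the Parquet scan of B reads only bid.
scala> val prunedB = dfB.select("bid")   // restrict B to bid before joining
scala> val joined = dfA.join(prunedB, dfA("aid") === prunedB("bid"), "left_outer")
     |   .filter(prunedB("bid") < 2)
     |   .select(dfA("aid"), prunedB("bid"))
scala> joined.explain                    // B-side scan should now list only bid#17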
External reference: http://stackoverflow.com/questions/40783675/spark-sql-catalyst-is-scanning-undesired-columns