Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Cannot Reproduce
- Affects Version/s: 1.5.2, 1.6.0
- Fix Version/s: None
- Component/s: None
- Environment: AWS EMR 4.2, OSX
Description
I have partitioned Parquet data that I am trying to query with Spark SQL. When a partition column appears on one side of an OR in the WHERE clause, the query throws an exception.
I first hit this issue when using spark-submit on a cluster, where the Parquet data had been created externally and registered externally in a JDBC-backed Hive metastore. I can also reproduce the behavior with a simplified example in the spark-shell, which I include below. Note that I launch the spark-shell with my hive-site.xml, so the metastore is configured the same way.
I also tried this locally on a Mac laptop with 1.6.0 and saw the same results.
Create some partitioned parquet:
case class Hit(meta_ts_unix_ms: Long, username: String, srclatitude: Double, srclongitude: Double, srccity: String, srcregion: String, srccountrycode: String, metaclass: String)

val rdd = sc.parallelize(Array(
  Hit(34L, "user1", 45.2, 23.2, "city1", "state1", "US", "blah, other"),
  Hit(35L, "user1", 53.2, 11.2, "city2", "state2", "US", "blah")))

sqlContext.createDataFrame(rdd).registerTempTable("test_table")

sqlContext.sql("select * from test_table where meta_ts_unix_ms = 35").write.parquet("file:///tmp/year=2015/month=12/day=4/hour=1/")
sqlContext.sql("select * from test_table where meta_ts_unix_ms = 34").write.parquet("file:///tmp/year=2015/month=12/day=3/hour=23/")
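For reference, the same two writes can also be expressed with the DataFrame API instead of SQL (a minimal sketch of my own, not part of the original repro; behavior should be identical on 1.5/1.6):

// Write each timestamp's rows into its corresponding partition directory.
val df = sqlContext.table("test_table")
df.filter(df("meta_ts_unix_ms") === 35L).write.parquet("file:///tmp/year=2015/month=12/day=4/hour=1/")
df.filter(df("meta_ts_unix_ms") === 34L).write.parquet("file:///tmp/year=2015/month=12/day=3/hour=23/")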
Create an external table from the parquet:
sqlContext.createExternalTable("test_table2", "file:///tmp/year=2015/", "parquet")
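As a quick sanity check (my addition, not part of the original report), partition discovery can also be observed by reading the base path directly; printSchema() should list the directory-derived columns (month, day, hour, and, depending on how the base path is handled, year) alongside the Parquet columns:

// Reading the base path triggers the same directory-based partition discovery.
sqlContext.read.parquet("file:///tmp/year=2015/").printSchema()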
If I understand correctly, the partition columns were discovered automatically: they show up in the describe output even though they were not part of the schema generated from the case class:
+---------------+---------+-------+
|       col_name|data_type|comment|
+---------------+---------+-------+
|meta_ts_unix_ms|   bigint|       |
|       username|   string|       |
|    srclatitude|   double|       |
|   srclongitude|   double|       |
|        srccity|   string|       |
|      srcregion|   string|       |
| srccountrycode|   string|       |
|      metaclass|   string|       |
|           year|      int|       |
|          month|      int|       |
|            day|      int|       |
|           hour|      int|       |
+---------------+---------+-------+
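(The table above is presumably the output of something along these lines; the exact command was not captured in the report.)

// Show the table schema without truncating column names.
sqlContext.sql("DESCRIBE test_table2").show(20, false)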
This query:
sqlContext.sql("SELECT meta_ts_unix_ms,username,srclatitude,srclongitude,srccity,srcregion,srccountrycode FROM test_table2 WHERE meta_ts_unix_ms IS NOT NULL AND username IS NOT NULL AND metaclass like '%blah%' OR hour = 1").show()
Throws this exception:
16/01/20 21:36:46 WARN TaskSetManager: Lost task 0.0 in stage 13.0 (TID 84, ip-192-168-111-222.ec2.internal): org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: metaclass#53
    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
    at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:86)
    at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:227)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:227)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:226)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:249)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:279)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:249)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:279)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:249)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:279)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:232)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:217)
    at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:85)
    at org.apache.spark.sql.catalyst.expressions.InterpretedPredicate$.create(predicates.scala:31)
    at org.apache.spark.sql.execution.SparkPlan.newPredicate(SparkPlan.scala:281)
    at org.apache.spark.sql.execution.Filter$$anonfun$4.apply(basicOperators.scala:114)
    at org.apache.spark.sql.execution.Filter$$anonfun$4.apply(basicOperators.scala:113)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Couldn't find metaclass#53 in [meta_ts_unix_ms#45L,username#46,srclatitude#47,srclongitude#48,srccity#49,srcregion#50,srccountrycode#51]
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:92)
    at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:86)
    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
    ... 77 more
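The "Caused by" message suggests the predicate is being bound against a scan output from which metaclass has already been pruned. As a possible workaround (my sketch, untested against this exact bug), the mixed partition/data predicate can be applied as a DataFrame filter after the scan rather than in the SQL WHERE clause:

// Hypothetical workaround: filter on the full row first, then project.
val t = sqlContext.table("test_table2")
t.filter("(meta_ts_unix_ms IS NOT NULL AND username IS NOT NULL AND metaclass LIKE '%blah%') OR hour = 1")
 .select("meta_ts_unix_ms", "username", "srclatitude", "srclongitude", "srccity", "srcregion", "srccountrycode")
 .show()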
By contrast, this query works fine and returns the expected results; note that it does not involve any of the partition columns in the OR portion of the WHERE clause:
sqlContext.sql("SELECT meta_ts_unix_ms,username,srclatitude,srclongitude,srccity,srcregion,srccountrycode FROM test_table2 WHERE meta_ts_unix_ms IS NOT NULL AND username IS NOT NULL AND metaclass like '%other%' OR metaclass = 'blah'").show()
Issue Links
- is related to: SPARK-26041 "catalyst cuts out some columns from dataframes: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute" (Resolved)