Description
Currently, when querying ORC tables in Hive, the plan generated by Spark shows that it is using the `HiveTableScan` operator, which is generic across all file formats. We could instead use the ORC data source here, so that we get ORC-specific optimizations such as predicate pushdown.
Current behaviour:
```
scala> hqlContext.sql("SELECT * FROM orc_table").explain(true)
== Parsed Logical Plan ==
'Project [unresolvedalias(*, None)]
+- 'UnresolvedRelation `orc_table`, None

== Analyzed Logical Plan ==
key: string, value: string
Project [key#171,value#172]
+- MetastoreRelation default, orc_table, None

== Optimized Logical Plan ==
MetastoreRelation default, orc_table, None

== Physical Plan ==
HiveTableScan [key#171,value#172], MetastoreRelation default, orc_table, None
```
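For comparison, the ORC-specific scan can already be exercised by reading the table's files through the ORC data source directly. Below is a minimal sketch: the warehouse path `/user/hive/warehouse/orc_table` is assumed for illustration, and `spark.sql.orc.filterPushdown` is the existing setting that governs predicate pushdown in the ORC reader.
```scala
import org.apache.spark.sql.functions.col

// Enable ORC predicate pushdown (off by default in this era of Spark).
hqlContext.setConf("spark.sql.orc.filterPushdown", "true")

// Reading via the ORC data source, rather than through the metastore
// relation, produces an ORC-specific scan instead of a HiveTableScan,
// so the filter below can be pushed into the ORC reader.
val df = hqlContext.read.orc("/user/hive/warehouse/orc_table")
df.filter(col("key") === "foo").explain(true)
```
The point of the proposal is that a plain `SELECT * FROM orc_table` through the metastore should plan the same ORC-aware scan without this manual detour.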
Issue Links
- breaks: SPARK-15705 Spark won't read ORC schema from metastore for partitioned tables (Resolved)
- is duplicated by: SPARK-12998 Enable OrcRelation when connecting via spark thrift server (Closed)
- links to