Contribute PoC code to Hive on Spark as the groundwork for subsequent tasks. While it contains hacks and badly organized code, it will evolve, and more importantly it allows multiple people to work on different components concurrently.
With this, simple queries such as "select col from tab where ..." and "select grp, avg(val) from tab where ... group by grp" can be executed on Spark.
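For illustration, the two supported query shapes might look like the following from the command line (the table and column names here are hypothetical; note that in HiveQL the WHERE clause precedes GROUP BY):

```shell
# Hypothetical table: tab(col STRING, grp STRING, val DOUBLE)

# Simple filter query
hive -e "SELECT col FROM tab WHERE col IS NOT NULL;"

# Simple aggregation with group-by
hive -e "SELECT grp, avg(val) FROM tab WHERE val > 0 GROUP BY grp;"
```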
Contents of the patch:
1. code path for an additional execution engine.
2. essential classes such as SparkWork, SparkTask, SparkCompiler, HiveMapFunction, HiveReduceFunction, SparkClient, etc.
3. some code changes to existing classes.
4. build infrastructure.
5. utility classes.
To try Hive on Spark, for now you need to:
1. use a self-built Spark 1.0.0 with the attached patch applied.
2. invoke the Hive client with the environment variable MASTER set to the master URL of Spark.
3. set hive.execution.engine=spark.
4. execute supported queries.
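Putting the steps above together, a session might look like the following sketch (the master URL, host, and query are examples, not prescribed values):

```shell
# Point the Hive client at the Spark master (standalone-mode URL shown as an example)
export MASTER=spark://master-host:7077

# Start the Hive CLI, switch the execution engine to Spark, and run a supported query
hive
hive> set hive.execution.engine=spark;
hive> SELECT grp, avg(val) FROM tab WHERE val > 0 GROUP BY grp;
```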
NO PRECOMMIT TESTS. This is for spark branch only.