Currently, the Spark client ships all Hive JARs, including Hive's transitive dependencies, to the Spark cluster whenever a query is executed via Spark. This is inefficient and can cause library conflicts. Ideally, only a minimal set of JARs should be shipped; this task is to identify that set.
We should learn from the current MR setup, where I assume only the hive-exec JAR is shipped to the MR cluster.
We also need to ensure that user-supplied JARs are shipped to the Spark cluster as well, in a similar fashion to what MR does.
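For reference, user-supplied JARs typically enter a Hive session via the `ADD JAR` command (or the `hive.aux.jars.path` property), and Hive is then responsible for making them available on the execution cluster. Whatever mechanism we choose for Spark should cover this path too. A sketch of the existing user-facing flow (the JAR path is a placeholder):

```sql
-- Register a user JAR for the current session; Hive ships it to the
-- execution cluster so UDFs/SerDes defined in it resolve at runtime.
ADD JAR /tmp/my-udf.jar;

-- Verify the JAR was registered in the session.
LIST JARS;
```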
NO PRECOMMIT TESTS. This is for spark-branch only.