  Hive / HIVE-7387

Guava version conflict between hadoop and spark [Spark-Branch]


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Spark
    • Labels: None

      Description

      The Guava conflict happens at the Hive driver compile stage. As the exception stack trace below shows, the conflict occurs while initializing a Spark RDD in SparkClient: the Hive driver has both Guava 11 (from the Hadoop classpath) and the Spark assembly jar (which bundles Guava 14 classes) on its classpath. Spark invokes HashFunction.hashInt, a method that does not exist in Guava 11, so evidently the Guava 11 version of HashFunction is the one loaded into the JVM, which leads to a NoSuchMethodError while initializing the Spark RDD.

      java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
      	at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
      	at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
      	at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
      	at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
      	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
      	at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
      	at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
      	at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
      	at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
      	at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
      	at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
      	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
      	at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
      	at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
      	at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
      	at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
      	at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
      	at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
      	at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
      	at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112)
      	at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527)
      	at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307)
      	at org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204)
      	at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167)
      	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32)
      	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159)
      	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
      	at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
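      A quick way to confirm which artifact actually supplied a conflicting class is to resolve its .class resource through the class's own loader; the returned URL points at the jar that won. A minimal sketch (a hypothetical diagnostic, not part of the attached patch; the class and method names here are illustrative):

      ```java
      // Hypothetical diagnostic: print which jar (or JDK image) a class
      // was loaded from, to see which Guava version wins on the classpath.
      public class WhichJar {
          static String locationOf(Class<?> cls) {
              // Resolve the .class file as a resource through the class's
              // own loader; the URL reveals the artifact that supplied it.
              String resource = cls.getName().replace('.', '/') + ".class";
              ClassLoader loader = cls.getClassLoader();
              java.net.URL url = (loader != null)
                      ? loader.getResource(resource)
                      : ClassLoader.getSystemResource(resource);
              return (url != null) ? url.toString() : "(unknown)";
          }

          public static void main(String[] args) {
              // With Guava on the classpath, one would pass
              // com.google.common.hash.HashFunction.class here instead.
              System.out.println(locationOf(String.class));
          }
      }
      ```

      Run inside the Hive driver with com.google.common.hash.HashFunction.class as the argument: if the URL points into the Hadoop lib directory rather than the Spark assembly jar, the Guava 11 classes are the ones being loaded.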
      

      NO PRECOMMIT TESTS. This is for spark branch only.
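      For reference, a common way to avoid this kind of clash in an uber jar is to shade and relocate the conflicting packages so they can coexist with whatever Hadoop puts on the classpath (Spark later took a similar approach with its bundled Guava). A sketch with the Maven Shade plugin; the relocated package name below is only an example, not taken from this issue:

      ```xml
      <!-- Hypothetical example: relocate bundled Guava classes so they
           cannot clash with the Guava 11 on the Hadoop classpath. -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <executions>
          <execution>
            <phase>package</phase>
            <goals><goal>shade</goal></goals>
            <configuration>
              <relocations>
                <relocation>
                  <pattern>com.google.common</pattern>
                  <!-- Illustrative target package -->
                  <shadedPattern>shaded.com.google.common</shadedPattern>
                </relocation>
              </relocations>
            </configuration>
          </execution>
        </executions>
      </plugin>
      ```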

        Attachments

        1. HIVE-7387-spark.patch
          0.7 kB
          Chao Sun


              People

              • Assignee: Chengxiang Li
              • Reporter: Chengxiang Li
              • Votes: 0
              • Watchers: 10
