HIVE-7467: When querying HBase table, task fails with exception: java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Later
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Spark
    • Labels: None
    • Environment: Spark-1.0.0, HBase-0.98.2

    Description

      When I run select count(*) on an HBase table, the Spark task fails with:

      java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
      at org.apache.hadoop.hbase.protobuf.RequestConverter.buildRegionSpecifier(RequestConverter.java:910)
      at org.apache.hadoop.hbase.protobuf.RequestConverter.buildGetRowOrBeforeRequest(RequestConverter.java:131)
      at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1403)
      at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1181)
      at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1059)
      at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1016)
      at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:326)
      at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:192)
      at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:165)
      at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getRecordReader(HiveHBaseTableInputFormat.java:93)
      at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:241)
      at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:193)
      at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:184)
      at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:93)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
      at org.apache.spark.scheduler.Task.run(Task.scala:51)

      NO PRECOMMIT TESTS. This is for the Spark branch only.
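This class of IllegalAccessError is a known HBase classloading pitfall: HBaseZeroCopyByteString is declared in the com.google.protobuf package so it can reach package-private protobuf internals, which only works when it is loaded by the same classloader as protobuf-java itself. A common workaround is to put the hbase-protocol jar directly on the executor classpath. A minimal sketch, assuming an HBase 0.98.2 install under /opt/hbase (the jar path and version are illustrative, not taken from this issue):

```shell
# Prepend hbase-protocol to the Spark executor classpath so that
# HBaseZeroCopyByteString and protobuf-java are resolved by the same
# classloader. Path and version below are assumptions; adjust to your install.
export SPARK_CLASSPATH="/opt/hbase/lib/hbase-protocol-0.98.2-hadoop2.jar:${SPARK_CLASSPATH:-}"

# A per-job alternative is Spark's extraClassPath setting, e.g.:
# spark-submit --conf spark.executor.extraClassPath=/opt/hbase/lib/hbase-protocol-0.98.2-hadoop2.jar ...
```

Whether the classpath tweak or a packaging change (bundling hbase-protocol with the Hive-on-Spark distribution) is the right long-term fix is exactly the question this issue tracks.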


            People

              Assignee: Jimmy Xiang (jxiang)
              Reporter: Rui Li (lirui)
