[HBASE-797] IllegalAccessError running RowCounter

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.19.0
    • Component/s: None
    • Labels: None

      Description

      Below is from Billy Pearson up on the list:

      Billy Pearson wrote:
      > I get this when I run RowCounter in the hbase jar
      >
      > java.lang.IllegalAccessError: tried to access method org.apache.hadoop.ipc.Client.incCount()V from class org.apache.hadoop.ipc.HBaseClient
      >        at org.apache.hadoop.ipc.HBaseClient.incCount(HBaseClient.java:39)
      >        at org.apache.hadoop.hbase.ipc.HbaseRPC$ClientCache.getClient(HbaseRPC.java:179)
      >        at org.apache.hadoop.hbase.ipc.HbaseRPC$ClientCache.access$200(HbaseRPC.java:156)
      >        at org.apache.hadoop.hbase.ipc.HbaseRPC$Invoker.<init>(HbaseRPC.java:224)
      >        at org.apache.hadoop.hbase.ipc.HbaseRPC.getProxy(HbaseRPC.java:336)
      >        at org.apache.hadoop.hbase.ipc.HbaseRPC.getProxy(HbaseRPC.java:327)
      >        at org.apache.hadoop.hbase.ipc.HbaseRPC.getProxy(HbaseRPC.java:364)
      >        at org.apache.hadoop.hbase.ipc.HbaseRPC.waitForProxy(HbaseRPC.java:302)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getHRegionConnection(HConnectionManager.java:764)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:815)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:457)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:431)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:510)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:467)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:431)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:510)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:471)
      >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:431)
      >        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:125)
      >        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:110)
      >        at org.apache.hadoop.hbase.mapred.TableInputFormat.configure(TableInputFormat.java:60)
      >        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
      >        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:82)
      >        at org.apache.hadoop.mapred.JobConf.getInputFormat(JobConf.java:400)
      >        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:705)
      >        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:973)
      >        at com.compspy.mapred.RowCounter.run(RowCounter.java:111)
      >        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
      >        at com.compspy.mapred.RowCounter.main(RowCounter.java:126)
      >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      >        at java.lang.reflect.Method.invoke(Method.java:597)
      >        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
      >        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
      >        at com.compspy.mapred.Driver.main(Driver.java:24)
      >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      >        at java.lang.reflect.Method.invoke(Method.java:597)
      >        at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
      >        at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
      >        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
      >        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
      >        at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)
      
      

      Sebastien Rainville just had a related issue. J-D, investigating, found a workaround: add the hbase jar to HADOOP_CLASSPATH in $HADOOP_HOME/conf/hadoop-env.sh.
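The workaround amounts to one line in hadoop-env.sh; the jar path below is illustrative and depends on where your hbase build actually lives:

```shell
# In $HADOOP_HOME/conf/hadoop-env.sh -- append the hbase jar to Hadoop's classpath.
# The path to the jar is an example; adjust it to your install.
export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/path/to/hbase/hbase-0.19.0.jar
```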

        Activity

        Andrew Purtell added a comment -

        We probably did not hit this because we symlink the hbase jar into hadoop/lib/.

        stack added a comment -

        Take another look. Now that we have metrics (or will soon), this might be fixed.

        stack added a comment -

        Perverse patch that tinkers with accessibility using reflection. Though the two classes are in the same package, somehow default access is not letting me get at default-access members in the superclass Client. I'd guess it's that they are loaded by two different classloaders. The patch as-is does not work in local mode; it does when running MR. Digging.
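The reflection approach the patch takes can be sketched roughly like this. The class and member names below are simplified stand-ins for illustration, not the actual Hadoop code:

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for the superclass whose inaccessible member we
// need to reach (org.apache.hadoop.ipc.Client.incCount() in the issue).
class Client {
    private int count;                       // counter behind incCount()
    private void incCount() { count++; }
    int getCount() { return count; }
}

public class ReflectiveAccess {
    public static void main(String[] args) throws Exception {
        Client client = new Client();
        // Look the method up on the declaring class and force accessibility,
        // sidestepping the access check that otherwise throws IllegalAccessError.
        Method inc = Client.class.getDeclaredMethod("incCount");
        inc.setAccessible(true);
        inc.invoke(client);
        System.out.println(client.getCount()); // prints 1
    }
}
```

Note that `setAccessible(true)` only defeats the language-level access check; it is exactly the kind of hack the later comments argue against.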

        stack added a comment -

        The proper fix for this is changing access up in Hadoop from private to protected; otherwise, it's a hack. Moving out of 0.19.
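A minimal sketch of what the proposed fix would look like, with simplified stand-in classes (not the actual Hadoop source): with protected rather than private access in the superclass, the subclass can call the member directly, with no reflection needed.

```java
// Stand-in for the Hadoop superclass; incCount() is protected here,
// where in Hadoop at the time it was private.
class Client {
    private int count;
    protected void incCount() { count++; }
    int getCount() { return count; }
}

// Stand-in for the HBase subclass that needs to bump the counter.
class HBaseClient extends Client {
    void connect() {
        incCount(); // legal: protected members are visible to subclasses
    }
}

public class ProtectedFix {
    public static void main(String[] args) {
        HBaseClient c = new HBaseClient();
        c.connect();
        System.out.println(c.getCount()); // prints 1
    }
}
```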

        stack added a comment -

        I think this is fixed by hbase-900 part 1. Bringing into 0.19.0 to test.

        stack added a comment -

        This works now that we have our own RPC.

        $ ./bin/hadoop jar /home/stack/trunk/build/hbase-0.19.0-dev.jar rowcounter /home/stack/xx 'TestTable2' info:server
        08/12/17 05:09:39 DEBUG client.HConnectionManager$TableServers: Found ROOT REGION => {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052, TABLE => {{NAME => '-ROOT-', IS_ROOT => 'true', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMF}
        08/12/17 05:09:39 DEBUG client.HConnectionManager$TableServers: Cache hit in table locations for row <> and tableName TestTable2: location server 208.76.44.140:60020, location region name TestTable2,,1229490444429
        08/12/17 05:09:39 INFO mapred.TableInputFormatBase: split: 0->aa0-000-13.u.powerset.com:,
        08/12/17 05:09:39 INFO mapred.JobClient: Running job: job_200812170506_0005
        08/12/17 05:09:40 INFO mapred.JobClient:  map 0% reduce 0%
        08/12/17 05:09:45 INFO mapred.JobClient:  map 100% reduce 0%
        
        08/12/17 05:09:58 INFO mapred.JobClient:  map 100% reduce 100%
        08/12/17 05:09:59 INFO mapred.JobClient: Job complete: job_200812170506_0005
        08/12/17 05:09:59 INFO mapred.JobClient: Counters: 16
        08/12/17 05:09:59 INFO mapred.JobClient:   File Systems
        08/12/17 05:09:59 INFO mapred.JobClient:     HDFS bytes written=23
        08/12/17 05:09:59 INFO mapred.JobClient:     Local bytes read=23
        08/12/17 05:09:59 INFO mapred.JobClient:     Local bytes written=74
        08/12/17 05:09:59 INFO mapred.JobClient:   Job Counters 
        08/12/17 05:09:59 INFO mapred.JobClient:     Launched reduce tasks=1
        08/12/17 05:09:59 INFO mapred.JobClient:     Launched map tasks=1
        08/12/17 05:09:59 INFO mapred.JobClient:     Data-local map tasks=1
        08/12/17 05:09:59 INFO mapred.JobClient:   RowCounter
        08/12/17 05:09:59 INFO mapred.JobClient:     Rows=1
        08/12/17 05:09:59 INFO mapred.JobClient:   Map-Reduce Framework
        08/12/17 05:09:59 INFO mapred.JobClient:     Reduce input groups=1
        08/12/17 05:09:59 INFO mapred.JobClient:     Combine output records=0
        08/12/17 05:09:59 INFO mapred.JobClient:     Map input records=1
        08/12/17 05:09:59 INFO mapred.JobClient:     Reduce output records=1
        08/12/17 05:09:59 INFO mapred.JobClient:     Map output bytes=15
        08/12/17 05:09:59 INFO mapred.JobClient:     Map input bytes=0
        08/12/17 05:09:59 INFO mapred.JobClient:     Combine input records=0
        08/12/17 05:09:59 INFO mapred.JobClient:     Map output records=1
        08/12/17 05:09:59 INFO mapred.JobClient:     Reduce input records=1
        

        See the RowCounter counter above whose value is 1.

        You still need to get the hbase conf into the mix somehow so that jobs like RowCounter can find the hbase instance. You can do this by adding the hbase config to hadoop-site.xml or by adding the hbase conf dir to HADOOP_CLASSPATH in hadoop-env.sh.

        Updated the mapreduce package doc accordingly.


          People

          • Assignee: stack
          • Reporter: stack
          • Votes: 0
          • Watchers: 1
