HBASE-10304

Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.98.0, 0.96.1.1
    • Fix Version/s: 0.98.0, 0.99.0
    • Component/s: documentation, mapreduce
    • Labels:
      None
    • Release Note:
      My local site run generates the docbook correctly, looks good to me on both branches. I've committed this to 0.98 and trunk.

      Description

      (Jimmy has been working on this one internally. I'm just the messenger raising this critical issue upstream).

      So, if you make a job jar and bundle up hbase inside it because you want to access hbase from your mapreduce task, the deploy of the job jar to the cluster fails with:

      14/01/05 08:59:19 INFO Configuration.deprecation: topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
      14/01/05 08:59:19 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
      Exception in thread "main" java.lang.IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString
      	at java.lang.ClassLoader.defineClass1(Native Method)
      	at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
      	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
      	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
      	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
      	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
      	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
      	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818)
      	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
      	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
      	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
      	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
      	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
      	at com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:124)
      	at com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:64)
      	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
      	at com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.main(HBaseMapReduceIndexerTool.java:51)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:606)
      	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
      

      So, ZCLBS (ZeroCopyLiteralByteString) is a hack. This class is in the hbase-protocol module, but it is "in" the com.google.protobuf package. All is well and good usually.
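
      For context, here is a minimal sketch of the trick, assuming protobuf 2.5's package-private LiteralByteString; the names mirror the hbase-protocol class, but the details are illustrative, not the exact committed code:

      package com.google.protobuf;

      // Declared in hbase-protocol but placed in protobuf's own package so it
      // can see the package-private LiteralByteString class and its backing array.
      public final class ZeroCopyLiteralByteString extends LiteralByteString {

        private ZeroCopyLiteralByteString(byte[] bytes) {
          super(bytes); // package-private constructor; wraps the array directly
        }

        // Wrap a byte[] as a ByteString without the defensive copy that
        // ByteString.copyFrom(byte[]) would make.
        public static ByteString wrap(final byte[] array) {
          return new ZeroCopyLiteralByteString(array);
        }

        // Reach into the package-private backing array, again without copying.
        public static byte[] zeroCopyGetBytes(final LiteralByteString literal) {
          return literal.bytes;
        }
      }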

      But when we make a job jar and bundle up hbase inside it, our 'trick' breaks. RunJar makes a new classloader to run the job jar. This URLClassLoader 'attaches' all the jars and classes that are in the job jar so they can be found when it goes to do a lookup. But classloaders work by always delegating to their parent first (unless you are a WAR file in a container, where delegation is 'off' for the most part), and in this case the parent classloader will have access to a pb jar since pb is on the hadoop CLASSPATH. So, the parent loads the pb classes.

      We then load ZCLBS, only this is done in the classloader made by RunJar; ZCLBS thus has a different classloader from its superclass, and we get the above IllegalAccessError.
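
      The failure mode can be shown schematically with two classloaders. A sketch, assuming parent.jar carries protobuf (as the hadoop CLASSPATH does) and job.jar carries ZCLBS — the jar paths here are hypothetical:

      import java.net.URL;
      import java.net.URLClassLoader;

      public class SplitPackageDemo {
        public static void main(String[] args) throws Exception {
          // Parent loader sees protobuf, like the hadoop CLASSPATH.
          URLClassLoader parent =
              new URLClassLoader(new URL[] { new URL("file:parent.jar") }, null);

          // Child loader sees the job jar, like the loader RunJar creates.
          // It delegates upward first, so LiteralByteString resolves in the parent.
          URLClassLoader child =
              new URLClassLoader(new URL[] { new URL("file:job.jar") }, parent);

          // The subclass exists only in job.jar, so the child defines it itself.
          // Superclass and subclass now sit in different *runtime* packages even
          // though the package name matches, so the package-private superclass
          // is inaccessible and class definition fails with IllegalAccessError.
          child.loadClass("com.google.protobuf.ZeroCopyLiteralByteString");
        }
      }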

      Now (Jimmy's work comes in here), this can't be fixed by reflection – you can't setAccessible on a Class – and though it probably could be fixed by hacking RunJar so it was somehow made configurable, letting us put in place our own ClassLoader that does something like containers do for WAR files (probably not a bad idea), there would be some fierce hackery involved. Besides, that fix won't show up in hadoop anytime soon, leaving hadoop 2.2ers out in the cold.
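
      For reference, the WAR-style behavior mentioned above is a 'child-first' lookup. A minimal sketch of such a loader (illustrative only; this is not something RunJar supports today):

      import java.net.URL;
      import java.net.URLClassLoader;

      // Try the job jar's own classes before delegating to the parent, the way
      // servlet containers treat WAR files. A production version would still
      // force java.* and other system classes to the parent.
      public class ChildFirstClassLoader extends URLClassLoader {

        public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
          super(urls, parent);
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
          synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
              try {
                c = findClass(name); // look in our own URLs first...
              } catch (ClassNotFoundException e) {
                c = super.loadClass(name, false); // ...then fall back to parent-first
              }
            }
            if (resolve) {
              resolveClass(c);
            }
            return c;
          }
        }
      }

      With such a hook, the job jar's bundled protobuf (subclass and superclass alike) would be defined by one loader and the IllegalAccessError would not arise.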

      So, the alternatives are:

      1. Undo the ZCLBS hack. We'd lose a lot of nice perf improvement, but I'd say this is preferable to crazy CLASSPATH hacks.
      2. Require folks to put hbase-protocol – that's all you'd need – on the hadoop CLASSPATH. This is kinda crazy.
      3. We could try shading the pb jar content or, probably better, just pull pb into hbase altogether, only under a different package. If it was in our code base, we could do more ZCLBS-like speedups.

      I was going to experiment with #3 above unless anyone else has a better idea.

      Attachments

      1. jobjar.xml
        3 kB
        stack
      2. hbase-10304_not_tested.patch
        0.8 kB
        Enis Soztutar
      3. HBASE-10304.docbook.patch
        4 kB
        Nick Dimiduk

          Activity

          Andrew Purtell added a comment -

          pull pb into hbase altogether only under a different package. If it was in our code base, we could do more ZCLSB-like speedups

          Of the alternatives above, this is the least bad option I think.

          Nick Dimiduk added a comment -

          For an immediate solution, my preference is the order in which you present the options. (1) solves the blocking problem immediately and gives us time to make an informed decision about (3). I find (2) an acceptable alternative because it can be managed via BigTop packaging and is effectively transparent to users of any distribution. Tarball users will need a big fat readme warning (though, as Mr. Purtell pointed out to me earlier today on an unrelated ticket, users sophisticated enough to rock the tarballs in prod are also very likely making use of infra automation à la Puppet or Chef, so this isn't a big deal for them either). (3) amounts to forking PB, something I don't think we should do lightly. Even if we tackle it as a packaging step via maven:assembly, it'll lead to version conflict problems down the road for users who want to use PB in their own application code (unless you also want to get into jarjar territory...).

          Hence, let's fix things for users today via (1) or (2) so we can continue conversation on responsible execution of (3).

          Andrew Purtell added a comment -

          Even if we tackle it as a packaging step via maven:assembly, it'll lead to version conflict problems down the road

          I was wondering if this might unblock the issue without handing back performance gains while we work on something better.

          Patrick Hunt added a comment -

          I realize that this is longer term (and perhaps you're already doing/did this and I just missed it?), but what about doing 1 and getting the fix you're trying to hack into the hbase codebase into upstream protobuf itself instead? Isn't this a change that would benefit the entire protobuf community?

          stack added a comment -

          Isn't this a change that would benefit the entire protobuf community?

          It is a change that goes against the pb lib philosophy of making a copy before going to work on it. The pb team also talks of 'copy is cheap' and 'object creation is cheap' in java up on the discussion lists (which is 'true', but no copy and no creation will always be better), so it might take a while and some work getting it contributed.

          If we were to go this route, we'd want to push more than just this one ZCLBS change or just go the route of this gentleman https://code.google.com/p/protobuf-gcless/ altogether.

          Let me measure what we lose by reverting (option 1). ZCLBS came in as part of the effort at getting us back to 0.94 numbers.

          Interesting that folks here think 2. is viable; I'd think we'd just be pissing folks off... but it'd be easy to require.

          Will get some more on 3. too.

          Will be back.

          stack added a comment -

          I used this for testing. It is an assembly that creates an hbase job jar to run on a cluster. Good for repro'ing this issue. Also, our little MR Driver program is broken since we modularized hbase. I can fix it or just purge it since it does not look like anyone uses it.

          Patrick Hunt added a comment -

          it is a change that goes against the pb lib philosophy of making a copy before going to work on it

          perhaps then the upstream change should be to make that class (LiteralByteString) protected rather than default access? iiuc that would allow you to provide your own impl properly.

          stack added a comment -

          perhaps then the upstream change should be to make that class (LiteralByteString) protected rather than default access? iiuc that would allow you to provide your own impl properly.

          This would imply violation of the deduced 'copy first' axiom?

          Reading the pb code, the public classes are final.

          Devaraj Das added a comment -

          I'd say we do (1) immediately and then do the hackery in RunJar (similar to what WAR does). I am not in favor of (3) – it seems to be a big undertaking.

          Enis Soztutar added a comment -

          Looking at the ZCLBS class, all three methods we need are indeed static. Can we get away with subclassing LiteralByteString with a simple patch, or am I reading this wrong?

          Enis Soztutar added a comment -

          Attaching a patch per above. Not tested, but seems to compile.

          Stack, can you easily repro the problem?

          Enis Soztutar added a comment -

          let's try qa.

          stack added a comment -

          Enis Soztutar no luck. Slightly different version of the same issue – but points for a good attempt for sure:

          Exception in thread "main" java.lang.IllegalAccessError: tried to access class com.google.protobuf.LiteralByteString from class com.google.protobuf.ZeroCopyLiteralByteString
          	at com.google.protobuf.ZeroCopyLiteralByteString.wrap(ZeroCopyLiteralByteString.java:41)
          	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1363)
          	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:830)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
          	at org.apache.hadoop.hbase.mapreduce.RowCounter.createSubmittableJob(RowCounter.java:146)
          	at org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:184)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
          
          stack added a comment -

          My 2. above is 'crazy', but it actually demands way more than is needed. 2. only requires hbase-protocol on the MR client CLASSPATH, where the job is being launched from, and not under hadoop/lib on all hosts on the cluster (thanks to Alejandro Abdelnur and Elliott Clark for correcting my misunderstanding). One of the lads here has confirmed that something like the below 'works' for MRv1 and MRv2:

          $ export HADOOP_CLASSPATH=/usr/lib/hbase/lib/hbase-protocol-0.96.1.1-*-*-*.jar
          $ ./bin/hadoop jar FATJOBJARWITHHBASE.jar
          

          For example:

          $ export HADOOP_CLASSPATH="./hbase/hbase-protocol/target/hbase-protocol-0.99.0-SNAPSHOT.jar"
          $ ./hadoop-2.2.0/bin/hadoop --config /home/stack/conf_hadoop/ jar ./hbase/hbase-assembly/target/hbase-0.99.0-SNAPSHOT-job.jar  org.apache.hadoop.hbase.mapreduce.RowCounter usertable
          

          I tried it locally. It for sure gets over the MR client IllegalAccessError hurdle, but I can't confirm the rowcounter mapreduce job actually runs to completion because I can't get a hadoop-2.2.0 yarn to work for me after spending a few hours on it. I'm giving up on it for the night.

          I could doc how to fix the exception with the above. This would remove this issue as a blocker.

          This 'fix' is actually a Jimmy Xiang suggestion from sometime yesterday morning.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12622298/hbase-10304_not_tested.patch
          against trunk revision .
          ATTACHMENT ID: 12622298

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8383//console

          This message is automatically generated.

          Nick Dimiduk added a comment -

          Sounds like the recommended approach can be HADOOP_CLASSPATH=$(hbase mapredcp). That ensures everything we deem necessary is made available to the appropriate classloader.

          In the future, maybe we deprecate the fat jar? It should not be necessary given the add*DependencyJars magic in TableMapReduceUtil. Making those methods generally useful was the goal of the `hbase mapredcp` command.
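
          As a sketch of that path — the driver class, mapper, and table name below are hypothetical — a thin jar plus TableMapReduceUtil might look like:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.hbase.HBaseConfiguration;
          import org.apache.hadoop.hbase.client.Result;
          import org.apache.hadoop.hbase.client.Scan;
          import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
          import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
          import org.apache.hadoop.hbase.mapreduce.TableMapper;
          import org.apache.hadoop.io.NullWritable;
          import org.apache.hadoop.mapreduce.Job;
          import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

          public class MyJobDriver {

            // Hypothetical mapper; per-row work goes in map().
            static class MyMapper extends TableMapper<NullWritable, NullWritable> {
              @Override
              protected void map(ImmutableBytesWritable row, Result value, Context ctx) {
                // process one row
              }
            }

            public static void main(String[] args) throws Exception {
              Configuration conf = HBaseConfiguration.create();
              Job job = Job.getInstance(conf, "my-hbase-job");
              job.setJarByClass(MyJobDriver.class);
              job.setOutputFormatClass(NullOutputFormat.class);

              // This overload defaults addDependencyJars to true, so hbase's jars
              // (hbase-protocol included) are shipped to the tasks via the
              // distributed cache -- no fat jar needed on the task side.
              TableMapReduceUtil.initTableMapperJob(
                  "mytable", new Scan(), MyMapper.class,
                  NullWritable.class, NullWritable.class, job);

              System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
          }

          Note this only covers the tasks; the launching client still needs hbase-protocol on its own classpath (the subject of this issue), hence the HADOOP_CLASSPATH recipe above.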

          The esteemed Mr. Holmes recommends people use -libjars anyway.

          I think something like below should be the "standard" way to launch an HBase job. Java developers are used to thinking about the classpath, so I don't think it's a burden on anyone.

          $ HADOOP_CLASSPATH=$(hbase mapredcp) hadoop jar foo.jar MainClass
          

          or perhaps, if you're fancy

          $ HADOOP_CLASSPATH=/path/to/hbase_config:$(hbase mapredcp) hadoop jar foo.jar
          

          The above commands make it explicitly clear what extra classpath entries are provided to Hadoop.

          Enis Soztutar added a comment -

          no luck. Slightly different version of same issue

          Yeah, it seems that Packages (as in package access) are scoped per ClassLoader, per http://osdir.com/ml/windows.devel.java.advanced/2004-05/msg00039.html. TIL.

          stack added a comment -

          Enis Soztutar That pointer helps.

          Andrew Purtell added a comment -

          I think something like below should be the "standard" way to launch an HBase job. Java developers are used to thinking about the classpath, so I don't think it's a burden on anyone.

          $ HADOOP_CLASSPATH=$(hbase mapredcp) hadoop jar foo.jar MainClass
          

          or perhaps, if you're fancy

          $ HADOOP_CLASSPATH=/path/to/hbase_config:$(hbase mapredcp) hadoop jar foo.jar
          

          So can we get away with a doc change / manual update as the fix for this issue?

          Jimmy Xiang added a comment -

          I have verified that the two workarounds work fine. With the fat hbase job jar, I can run row counter and get correct results.

          1. run the job like

          HADOOP_CLASSPATH=/path/to/hbase_config:/path/to/hbase-protocol.jar hadoop jar fat-hbase-job.jar

          2. put the hbase-protocol jar under hadoop/lib so that MR can pick it up, and run the job as before

          +1 on doc change as the fix for this issue.

          stack added a comment -

          Agree with Jimmy Xiang

          I could have a go at it, np, but Nick Dimiduk, you have an opinion on where we should be going that I like ("Deprecate fat job jar....") and you are a better writer... do you want to do up a doc patch?

          Nick Dimiduk added a comment -

          Sure, I can write something up. I suppose there's no need to deprecate the "fat jar" approach so long as the docs are clear.

          Nick Dimiduk added a comment -

          Here's some copy we can use. Where in the book would you want something like this to live? I also suggest the package-info be updated as well.

          Problem

          Mapreduce jobs submitted to the cluster via a "fat jar," that is, a jar containing a 'lib' directory with their runtime dependencies, fail to launch. The symptom is an exception similar to the following:

          Exception in thread "main" java.lang.IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString
          	at java.lang.ClassLoader.defineClass1(Native Method)
          	at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
          	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
          	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
          	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
          	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
          	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
          	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
          	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
          	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
          	at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
          ...
          

          This is because of an optimization introduced in HBASE-9867 that inadvertently introduced a classloader dependency.

          Jobs submitted using a regular jar and specifying their runtime dependencies using the -libjars parameter are not affected by this regression. More details about using the -libjars parameter are available in this blog post.

          Solution

          In order to satisfy the new classloader requirements, hbase-protocol.jar must be included in Hadoop's classpath. This can be resolved system-wide by including a reference to the hbase-protocol.jar in hadoop's lib directory, via a symlink or by copying the jar into the new location.

          This can also be achieved on a per-job launch basis by specifying a value for HADOOP_CLASSPATH at job submission time. All three of the following job launching commands satisfy this requirement:

          $ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar hadoop jar MyJob.jar MyJobMainClass
          $ HADOOP_CLASSPATH=$(hbase mapredcp) hadoop jar MyJob.jar MyJobMainClass
          $ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass
          

          Apache Reference JIRA

          See also HBASE-10304.

          Jimmy Xiang added a comment -

          I tried with -libjars, and it gave me the same problem. So it is not working for me.

          I also tried the three suggestions. The first two of them need some tweaking, while the third one works as-is.

          $ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar hadoop jar MyJob.jar MyJobMainClass

          I got this:

          14/01/10 15:31:05 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
          java.net.ConnectException: Connection refused
          	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
          	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
          	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
          	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
          

          Basically, I can't connect to the ZK. I have to add the hbase conf dir as below:

          $ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase-conf hadoop jar MyJob.jar MyJobMainClass
          

          $ HADOOP_CLASSPATH=$(hbase mapredcp) hadoop jar MyJob.jar MyJobMainClass

          Same as above. I need to add hbase conf dir to the path:

          $ HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase-conf hadoop jar MyJob.jar MyJobMainClass
          

          $ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass

          Works for me.

          Nick Dimiduk added a comment -

          Jimmy Xiang so the only thing you needed to tweak for the first two variations was explicit inclusion of the hbase config in $HADOOP_CLASSPATH? Where else would the hadoop invocation pick up hbase-site.xml? Adding the hbase config in this invocation method has always been required, right?

          What about launching the job using our bin/hbase script? Do you see the same IllegalAccessError when launching the fat jar that way?

          Jimmy Xiang added a comment -

          Makes sense. The bin/hbase script doesn't accept a jar command. It may need some tweaking to work.

          Nick Dimiduk added a comment -

          No jar command required. I'm thinking:

          HBASE_CLASSPATH=/path/to/my/myjob-fat.jar bin/hbase MyJobMainClass

          I'm getting my rig set up over here so I can repro and experiment a little more constructively. More to follow.

          Nick Dimiduk added a comment -

          Hmm. Even this version isn't working for me, with the same failure attempting to hit zookeeper on localhost.

          $ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass
          

          So basically, none of my proposed solutions work with my sample application. There are two different issues though: one being the topic of this ticket, the other being locating hbase-site.xml.

          Nick Dimiduk added a comment -

          The hbase-site.xml issue was in my application.

          I have confirmed and reproduced this bug on a 5-node Hadoop 2 cluster.

          The following invocations trigger the bug, as reported:

          $ hadoop jar MyApp-job.jar ...
          $ HADOOP_CLASSPATH=/etc/hbase/conf hadoop jar MyApp-job.jar ...
          

          The following invocations all result in running applications, both local applications and MRv2 jobs:

          $ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/etc/hbase/conf hadoop jar MyApp-job.jar ...
          $ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp-job.jar ...
          $ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyApp-job.jar ...
          $ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ...
          

          Notice the last version, which makes use of the -libjars feature with a jar containing only application code.

          Nick Dimiduk added a comment -

          Here's my reference application.

          https://github.com/ndimiduk/hbase-fatjar

          Nick Dimiduk added a comment -

          Jimmy Xiang mind comparing notes, to see if the above works for you and whether you hit any edge cases I didn't? Then I'll update the statement above and provide a doc patch.

          Thanks.

          Jimmy Xiang added a comment -

          Yes, the four invocations all work for me. Thanks.

          Jimmy Xiang added a comment -

          I tried putting hbase-site.xml at the top level of the fat jar; it also works if I don't specify the conf dir in HADOOP_CLASSPATH.

          Nick Dimiduk added a comment -

          Here's an update for the book. Let me know what you think.

          Jimmy Xiang added a comment -

          The content looks good to me.

          stack added a comment -

          Nice work in here lads (Jimmy and Nick). +1 on the quality doc.

          Nick Dimiduk added a comment -

          Good deal. I'll commit after the site build on QABot passes.

          stack should I also try my hand at updating the site? I see you've documented it very nicely...

          Andrew Purtell added a comment -

          Good deal. I'll commit after the site build on QABot passes.

          Isn't HadoopQA always failing the site builds?

          stack added a comment -

          Yeah. I think Enis filed an issue on why. I would just do an xmllint on the file after your patch goes in, Nick. If it works, apply. If you were up for deploying the site, that'd be sweet. Ping me if you need any pointers or you run into blocks.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12622998/HBASE-10304.docbook.patch
          against trunk revision .
          ATTACHMENT ID: 12622998

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
          + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
          + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
          + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
          + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
          +$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
          +$ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ...

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8427//console

          This message is automatically generated.

          Hudson added a comment -

          SUCCESS: Integrated in HBase-0.98 #84 (See https://builds.apache.org/job/HBase-0.98/84/)
          HBASE-10304 [docbook update] Running an hbase job jar: IllegalAccessError (ndimiduk: rev 1558497)

          • /hbase/branches/0.98/src/main/docbkx/book.xml
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4822 (See https://builds.apache.org/job/HBase-TRUNK/4822/)
          HBASE-10304 [docbook update] Running an hbase job jar: IllegalAccessError (ndimiduk: rev 1558490)

          • /hbase/trunk/src/main/docbkx/book.xml
          Hudson added a comment -

          SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #77 (See https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/77/)
          HBASE-10304 [docbook update] Running an hbase job jar: IllegalAccessError (ndimiduk: rev 1558497)

          • /hbase/branches/0.98/src/main/docbkx/book.xml
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #54 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/54/)
          HBASE-10304 [docbook update] Running an hbase job jar: IllegalAccessError (ndimiduk: rev 1558490)

          • /hbase/trunk/src/main/docbkx/book.xml
          Arijit Banerjee added a comment -

          What is the workaround for running such an application through Oozie? Setting HADOOP_CLASSPATH in Java and MapReduce actions is not possible. There seems to be no provision to do that.

          Nick Dimiduk added a comment -

          This was resolved via HBASE-11118. Please see my comment at the end of that ticket.

          Enis Soztutar added a comment -

          Closing this issue after 0.99.0 release.


            People

            • Assignee:
              Nick Dimiduk
            • Reporter:
              stack
            • Votes:
              0
            • Watchers:
              15
