HBASE-4050

Update HBase metrics framework to metrics2 framework

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.90.4
    • Fix Version/s: 0.95.2
    • Component/s: metrics
    • Labels:
      None
    • Environment:

      Java 6

    • Tags:
      0.96notable

      Description

      The Metrics Framework has been marked deprecated in Hadoop 0.20.203+ and 0.22+, and it might be removed in a future Hadoop release. Hence, HBase needs to revise its dependency on MetricsContext to use the Metrics2 framework.

      1. HBASE-4050-8.patch
        99 kB
        Elliott Clark
      2. HBASE-4050-8_1.patch
        20 kB
        Alex Baranau
      3. HBASE-4050-7.patch
        98 kB
        Elliott Clark
      4. HBASE-4050-6.patch
        98 kB
        Elliott Clark
      5. HBASE-4050-5.patch
        97 kB
        Elliott Clark
      6. HBASE-4050-3.patch
        98 kB
        Elliott Clark
      7. HBASE-4050-2.patch
        123 kB
        Elliott Clark
      8. HBASE-4050-1.patch
        121 kB
        Elliott Clark
      9. HBASE-4050-0.patch
        121 kB
        Elliott Clark
      10. HBASE-4050.patch
        13 kB
        Alex Baranau
      11. 4050-metrics-v3.patch
        23 kB
        Alex Baranau
      12. 4050-metrics-v2.patch
        13 kB
        Ted Yu

        Issue Links

          Activity

          stack added a comment -

          Made this critical for 0.94 (it's not going to happen for 0.92; it'd probably be fine if 0.92 only ran on 0.20+0.22 hadoops... 0.94 hbase should be able to do 0.20, 0.22, and 0.23 hadoops).

          Otis Gospodnetic added a comment -

          Stack mentioned this is critical for 0.94, but Fix Version/s says "None".
          Is anyone working on this by any chance?

          Ted Yu added a comment -

          @Otis:
          It has been 11 months.
          Please take this if you have time.

          Alex Baranau added a comment -

          I'll try to get to this in 3 weeks (we very much want this done). If anyone wants to jump on this sooner, that would be great, too.

          Ted Yu added a comment -

          3 weeks flew by

          Alex Baranau added a comment -

          True!

          There's an ongoing effort to sum up all the work we need to do around metrics, all the jiras, etc.: http://search-hadoop.com/m/DBMT21ENvX4. I.e. before hopping on this issue.

          At the same time, I have a feeling we might want to attack this one first/anyways/independently.

          We may want to keep the old metrics available (some existing systems may rely on them) while exposing the new ones (in a new way). Or do we not?

          Having said that, we could limit this issue to integrating the metrics2 framework, and add more and more metrics based on what we conclude while summing up the many metrics issues (see above). I.e. the effort of this issue would be just integrating the metrics2 framework and adding some basic metrics. Assuming we are 100% for going with the metrics2 framework.

          Thoughts?

          Ted Yu added a comment -

          the effort of this issue would be just integrating the metrics2 framework and adding some basic metrics.

          Sounds good.

          Ted Yu added a comment -

          The following came from Andy under 'Metrics in 0.96' discussion:

          > Meanwhile I suggested taking on this one:
          > https://issues.apache.org/jira/browse/HBASE-4050 separately/first. What do
          > you think?

          +1

          In related discussion, it looks to me like Stack thinks pulling in the
          histogram accuracy improvement work via metrics2 is a good idea.

          Independent of this, something I would like to be able to do is log
          metrics to a file as well as to Ganglia concurrently, and metrics2 can do
          that.

          Alex Baranau added a comment -

          Attached the very first version of a patch.

          Note: since HBase trunk depends on Hadoop 1.0.3, we cannot use the fancier annotations of the metrics2 framework, which were added later. Still, things should look clean. As far as I understand, if we later decide to switch to using annotations, we should be OK: external systems will not notice the change.
          Note 2: I only added new metrics for HMaster so far.

          As we agreed, I only integrated the new framework and left the old metrics as they are. Hence, sorry for the weird "V2"-suffixed names in this initial patch.
          Also, as agreed, I've added only a single metric to be exposed: cluster_requests. I tried to add a unit test for it (it wasn't easy to test that metric; I fear this test may occasionally fail because the fake request in the test is not the only request that can happen at the time of execution).

          Anyhow, will work further on this.
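          The shape of such a source can be sketched as follows. Note this is a self-contained illustration with hypothetical stand-in classes, not the real Hadoop metrics2 API (the real MetricsSource/registry/counter classes live in org.apache.hadoop.metrics2 and need Hadoop on the classpath):

          ```java
          // Sketch of the metrics2 "source + registry + mutable counter" pattern.
          // All types here are stand-ins invented for illustration.
          import java.util.LinkedHashMap;
          import java.util.Map;

          class MetricsSketch {
            /** Stand-in for a mutable counter metric. */
            static final class Counter {
              private long value;
              void incr() { value++; }
              long value() { return value; }
            }

            /** Stand-in for a metrics registry owning named counters. */
            static final class Registry {
              private final Map<String, Counter> counters = new LinkedHashMap<>();
              Counter newCounter(String name) {
                Counter c = new Counter();
                counters.put(name, c);
                return c;
              }
            }

            /** Sketch of a master metrics source exposing a single counter,
                analogous to the cluster_requests metric described above. */
            static final class MasterMetricsV2 {
              private final Registry registry = new Registry();
              private final Counter clusterRequests = registry.newCounter("cluster_requests");
              void incrementRequests(int inc) {
                for (int i = 0; i < inc; i++) clusterRequests.incr();
              }
              long getClusterRequests() { return clusterRequests.value(); }
            }

            public static void main(String[] args) {
              MasterMetricsV2 metrics = new MasterMetricsV2();
              metrics.incrementRequests(3);
              System.out.println("cluster_requests=" + metrics.getClusterRequests());
            }
          }
          ```

          In the real framework the source would additionally register itself with the metrics system so sinks and JMX can pull snapshots from it.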

          Ted Yu added a comment -
          +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsV2.java  (revision )
          

          We can introduce a metrics2 package and put MasterMetrics.java there, what do you think?

          +
          +public class TestMasterMetrics {
          

          The above class is a small test, right?

          I assume, once complete, the Metrics2 metrics would replace the old metrics.

          Ted Yu added a comment -

          Since a cluster is involved in the test, I categorize it as medium.

          Ted Yu added a comment -

          The reason Hadoop QA didn't get back was an OutOfMemoryError when compiling against hadoop 2.0:

          [INFO] Compiling 765 source files to /Users/zhihyu/trunk-hbase/hbase-server/target/classes
          [INFO]                                                                         
          [INFO] ------------------------------------------------------------------------
          [INFO] Skipping HBase - Server
          [INFO] This project has been banned from the build due to previous failures.
          [INFO] ------------------------------------------------------------------------
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD FAILURE
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 12:10.921s
          [INFO] Finished at: Sat Jul 07 03:25:44 PDT 2012
          [INFO] Final Memory: 35M/123M
          [INFO] ------------------------------------------------------------------------
          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile (default-compile) on project hbase-server: Fatal error compiling: Error while executing the compiler. InvocationTargetException: Java heap space -> [Help 1]
          org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile (default-compile) on project hbase-server: Fatal error compiling
          	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
          	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
          	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
          	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
          	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
          	at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
          	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
          	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
          	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
          	at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
          	at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
          	at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          	at java.lang.reflect.Method.invoke(Method.java:597)
          	at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
          	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
          	at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
          	at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
          Caused by: org.apache.maven.plugin.MojoExecutionException: Fatal error compiling
          	at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:498)
          	at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:114)
          	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
          	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
          	... 19 more
          Caused by: org.codehaus.plexus.compiler.CompilerException: Error while executing the compiler.
          	at org.codehaus.plexus.compiler.javac.JavacCompiler.compileInProcess(JavacCompiler.java:434)
          	at org.codehaus.plexus.compiler.javac.JavacCompiler.compile(JavacCompiler.java:141)
          	at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:493)
          	... 22 more
          Caused by: java.lang.reflect.InvocationTargetException
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          	at java.lang.reflect.Method.invoke(Method.java:597)
          	at org.codehaus.plexus.compiler.javac.JavacCompiler.compileInProcess(JavacCompiler.java:420)
          	... 24 more
          Caused by: java.lang.OutOfMemoryError: Java heap space
          

          Here is the command I used:

          mvn clean test help:active-profiles -X -DskipTests -Dhadoop.profile=2.0
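          Since the in-process javac shares Maven's heap, the usual workaround (an assumption on my part, not something confirmed in this thread; the 1g/256m values are illustrative) is to raise the heap via MAVEN_OPTS before rerunning the build:

          ```shell
          # Give Maven (and the in-process compiler) a larger heap.
          # -XX:MaxPermSize is relevant on the Java 6 era JVMs this issue targets.
          export MAVEN_OPTS="-Xmx1g -XX:MaxPermSize=256m"

          # Then rerun the same build, e.g.:
          # mvn clean test help:active-profiles -X -DskipTests -Dhadoop.profile=2.0
          echo "MAVEN_OPTS=$MAVEN_OPTS"
          ```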
          
          Alex Baranau added a comment -

          The problem with building against hadoop 2.0 (and the same goes for hadoop 3.0, I believe) is that the classes in the metrics2 package were renamed. So I got not an OOME but compilation errors, like these (not all shown):

          [ERROR] /Users/alex/shared/hbase-trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsV2.java:[25,37] cannot find symbol
          [ERROR] symbol  : class MetricMutableCounterInt
          [ERROR] location: package org.apache.hadoop.metrics2.lib
          [ERROR] 
          [ERROR] /Users/alex/shared/hbase-trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsV2.java:[27,40] cannot find symbol
          [ERROR] symbol  : class JvmMetricsSource
          [ERROR] location: package org.apache.hadoop.metrics2.source
          

          In hadoop 2.0+ these classes were renamed (not only these two):
          MetricMutableCounterInt -> MutableCounterInt
          JvmMetricsSource -> JvmMetrics

          How soon do you think HBase can drop the dependency on hadoop 1.0?
          Any suggestions about this situation?

          Alex Baranau added a comment -

          We can introduce a metrics2 package and put MasterMetrics.java there, what do you think?

          Had the same thought, but I didn't want HMaster to use two different classes with the same name; the code would look a bit ugly.
          What I think is better than "V2" is to call the new metrics classes "HMasterMetrics" and "HRegionServerMetrics", while the old ones stay without the "H" prefix. These names seem not that ugly to me.

          I assume, once complete, the Metrics2 metrics would replace the old metrics

          I believe so. Though removing the older metrics very soon might not be an easy decision: I think there are home-grown cluster-monitoring systems/scripts out there that rely on them. So we may want to be polite and keep both in the first release that includes metrics2, and say that the old ones will be removed in the future. I think this needs broader discussion

          Ted Yu added a comment -

          How soon do you think HBase can drop the dependency on hadoop 1.0?

          hadoop 1.0 is considered the stable release for the near future. We have to accommodate both hadoop 1.0 and hadoop 2.0.
          Some form of shim would be needed.
          Take a look at src/main/java/org/apache/hadoop/hbase/util/ShutdownHookManager.java, which I introduced in HBASE-5963.

          I will send out a poll on the dev@ list for when the old metrics should be removed.

          Alex Baranau added a comment -

          Added metrics v2 to HRegionServer. Renamed the "v2" classes from the initial patch per my previous comment.

          HRegionServerMetrics for now has only one gauge, "regions #".

          Added the simplest unit test. Also checked (locally) that the new metrics are exposed via JMX.

          I did the integration with metrics2 by analogy with the DataNode metrics; some things (esp. naming) should be reviewed.

          Next things:

          • make it work with hadoop 2.0+ (the shim)
          • test at least one sink, e.g. FileSink, in addition to checking that things are exposed via JMX
          • test the perf effect. Not sure this is needed...
          • add more metrics in the new classes. I think we decided not to do this as part of this issue, as metrics in general should be reworked.
          • review patch + fix found issues
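          For the FileSink item above, sinks are wired up in hadoop-metrics2.properties. A minimal sketch (the `hbase` prefix and the file name are assumptions for illustration; check the prefix the sources actually register under):

          ```properties
          # hadoop-metrics2.properties (sketch; prefix and filename are assumed)
          # Attach a FileSink instance named "file" for every prefix.
          *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
          # Emit records every 10 seconds.
          *.period=10
          # Where the hbase-prefixed sources write their records.
          hbase.sink.file.filename=hbase-metrics.out
          ```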
          Ted Yu added a comment -

          As Todd pointed out on the dev mailing list, we can be flexible with the class names for the new metrics classes as long as we try to keep the user-facing (JMX) interfaces the same as those from 0.94.
          The new metrics classes should be annotated with @InterfaceAudience.Private

          Alex Baranau added a comment -

          So, about the shim. Haven't done that kind of thing before, so please correct me where/if I'm wrong.

          Background:
          There are quite a lot (>10) of classes from the metrics2 framework that we want to use in HBase code and that differ between Hadoop 1.0+ and 2.0+.

          There are basically two options.

          1. Create a shim class for every class we want to use from Hadoop metrics2 that differs between the Hadoop 1.0+ and 2.0+ codebases.

          A shim class basically consists of:

          • an interface (or maybe an abstract class), similar to (or exactly the same as, depending on which methods we want to use) the Hadoop class
          • multiple implementations of this interface, each using reflection to dynamically load the class from Hadoop (based on which Hadoop version is in use) and forward method invocations to it

          Pros: no extra/external jars to be loaded; all code is in the main HBase maven module
          Cons: a lot of code which uses reflection (i.e. ugly code, hard to maintain, plus bad performance)

          2. Create shim classes only for "central/factory" classes; add just interfaces for the classes we want to use from Hadoop metrics2 that differ between the Hadoop 1.0+ and 2.0+ codebases; extract the actual implementations of these interfaces into shim jars; and load the right jar depending on which Hadoop version we need to use.

          In other words:

          • add only interfaces in the HBase codebase (in a new metrics2 package) for the metrics2 framework classes we want to use
          • create shim classes for "central/factory" classes, such as MetricsRegistry: an interface plus code to load the actual implementation by class name
          • create two shim jars (e.g. "metrics2-hadoop-1.0-shim.jar" and "metrics2-hadoop-2.0-shim.jar"), each with implementations of the interfaces above using classes from the corresponding Hadoop version
          • add a ShimLoader class which, at runtime, finds out which Hadoop version is used and which shim jar should be loaded, and loads the corresponding jar

          Pros: minimum reflection (just for initializing the "central/factory" classes, once)
          Cons: needs loading jars (not sure this is a con); we'll probably have to add separate maven modules to build the two shim jars
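          Option 2's loading idea can be sketched like this. Every name below (the shim interface, the probe, the two registry stand-ins) is hypothetical and invented for illustration; the only reflection is a one-time classpath probe, and the real version would load the implementation out of the chosen shim jar rather than pick between two local classes:

          ```java
          // Sketch of the "central/factory" shim from option 2 (all names hypothetical).
          class ShimSketch {
            /** Shim interface the HBase code would program against. */
            interface MetricsRegistryShim {
              String describe();
            }

            /** Would live in the hadoop-1.0 shim jar, using e.g. MetricMutableCounterInt. */
            static final class Hadoop1Registry implements MetricsRegistryShim {
              public String describe() { return "hadoop-1.0 metrics2 registry"; }
            }

            /** Would live in the hadoop-2.0 shim jar, using e.g. MutableCounterInt. */
            static final class Hadoop2Registry implements MetricsRegistryShim {
              public String describe() { return "hadoop-2.0 metrics2 registry"; }
            }

            /** One-time probe: this class name exists only on a hadoop-1.0 classpath. */
            static String detectHadoopLine() {
              try {
                Class.forName("org.apache.hadoop.metrics2.lib.MetricMutableCounterInt");
                return "1.0";
              } catch (ClassNotFoundException e) {
                return "2.0";
              }
            }

            /** The real ShimLoader would load the implementation from the matching
                shim jar by class name; here we just pick between local stand-ins. */
            static MetricsRegistryShim loadRegistry() {
              return "1.0".equals(detectHadoopLine())
                  ? new Hadoop1Registry()
                  : new Hadoop2Registry();
            }

            public static void main(String[] args) {
              System.out.println(loadRegistry().describe());
            }
          }
          ```

          On a classpath without Hadoop 1.0 the probe falls through to the 2.0 branch, so callers only ever see the shim interface regardless of which line is present.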

          Alex Baranau added a comment -

          If the above is correct/sensible, I'd say I'm for the 2nd option.

          P.S. Sorry for a lot of text above.

          Elliott Clark added a comment -

          The reflection option seems like it would be a big perf hit. Reflection basically means that nothing ever gets inlined so all function calls into metrics2 code would be very expensive. Since it seems like we are adding more and more instrumentation this perf impact would only grow. In addition as more hadoop versions come out all of our reflection code would get much more complicated and brittle.

          The conditionally loaded jar would be nice in that the JIT would only see one version of the factory classes on the classpath, and everything could be optimized just like any other JIT'd code. In addition, there are other places where we use reflection to do things conditionally, and a conditionally loaded jar would be nice there too.

          So I'm for #2.

          Hide
          Jonathan Hsieh added a comment -

          Just a few points of data to help make the decision – hbase needs to be recompiled to run against hadoop 2.0 hdfs – the jars compiled against hadoop 1.0 don't work against a hadoop 2.0 hdfs. Kind of makes you wish for #ifdefs.

          Reflection only really needs to be done once – after you have figured out which version you are in, you can cache the Method, and you should be able to use the same instance from that point forward. You can see how to do it by checking how hadoop's sync/hflush are called in a region server.

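          The "reflect once, then cache the Method" approach described above can be sketched roughly as below. The wrapper class and its use of the hflush/sync names are illustrative stand-ins, not HBase's actual WAL code:

```java
import java.lang.reflect.Method;

/**
 * Sketch of the "look up via reflection once, cache the Method" pattern.
 * The hflush/sync names mirror the HDFS output stream case mentioned above;
 * the wrapper class itself is hypothetical.
 */
public class CachedFlushInvoker {
  private final Object stream;
  private final Method flushMethod; // resolved once, reused for every call

  public CachedFlushInvoker(Object stream) {
    this.stream = stream;
    Method m;
    try {
      m = stream.getClass().getMethod("hflush"); // newer Hadoop name
    } catch (NoSuchMethodException e) {
      try {
        m = stream.getClass().getMethod("sync"); // older Hadoop name
      } catch (NoSuchMethodException e2) {
        throw new IllegalStateException("Neither hflush() nor sync() found", e2);
      }
    }
    this.flushMethod = m;
  }

  public void flush() throws Exception {
    // No per-call lookup cost; only the Method.invoke dispatch remains.
    flushMethod.invoke(stream);
  }
}
```

          The version probe pays the reflection cost exactly once, at construction; every later call reuses the cached Method instance.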
          Ted Yu added a comment -

          For option #2, it looks like we need some maven expert(s) to establish modules for building shim jars based on hadoop 1.0 and hadoop 2.0.
          This could potentially be a blocker.

          Elliott Clark added a comment -

          hbase needs to be recompiled to run against hadoop 2.0 hdfs

          When did this happen? In the pom, all that changes for the hadoop 2.0 compile is that some dependencies are swapped. I just tried on a local machine, and by only changing the libs dir I was able to run either version (stand alone only, so not really an exhaustive test, I know).

          Elliott Clark added a comment -

          ServiceLoader (http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html) seems like exactly the thing we need.

          hbase-hadoop1 – Hadoop 1 implementations including the metrics providers
          hbase-hadoop2 – Hadoop 2 implementations including the metrics providers
          hbase-server would then get one of the two above implementations

          Most things are pretty easy. The assembly action is a little bit more difficult, but I think I can get that. My first question is: should the interfaces for the shim classes be in hbase-common, or should they go into a new module (hbase-hadoop-compat?)

          Making a hbase-hadoop-compat module has the nice property of keeping the module dependency structure flat. (Please forgive the bad ascii art that follows)

          hbase-(hadoop1|hadoop2) ----> hbase-hadoop-compat-----\
                                                                 |
                                                                 |
          hbase-common --------------------------------------> hbase-server
          

          otherwise we would have:

          hbase-common -------> hbase-compat -> hbase-(hadoop1|hadoop2) -----\
           |                                                                  |
           |                                                                  |
           +---------------------------------------------------------------> hbase-server
          

          Thoughts ?

          Ted Yu added a comment -

          Good idea.
          ServiceLoader is used widely in hadoop, e.g. from Token.java:

                for (TokenIdentifier id : ServiceLoader.load(TokenIdentifier.class)) {
                  tokenKindMap.put(id.getKind(), id.getClass());
                }
          

          I would vote for hbase-hadoop-compat module.

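          For the compat-module idea, a minimal ServiceLoader-based lookup could look like the sketch below. `MetricsSourceFactory` and its method are hypothetical stand-ins for whatever interface would live in hbase-hadoop-compat; each hbase-hadoop{1,2}-compat jar would ship one implementation plus a META-INF/services entry naming it:

```java
import java.util.Iterator;
import java.util.ServiceLoader;

// Hypothetical interface from hbase-hadoop-compat; the signature is illustrative.
interface MetricsSourceFactory {
  Object createSource(String name);
}

public class CompatFactoryLoader {
  /**
   * Loads whichever MetricsSourceFactory implementation the single
   * compat jar on the classpath advertises via META-INF/services.
   */
  public static MetricsSourceFactory load() {
    Iterator<MetricsSourceFactory> impls =
        ServiceLoader.load(MetricsSourceFactory.class).iterator();
    if (!impls.hasNext()) {
      throw new IllegalStateException(
          "No MetricsSourceFactory found; is an hbase-hadoop*-compat jar on the classpath?");
    }
    return impls.next();
  }
}
```

          Because only one compat jar is shipped in a given assembly, the iterator yields exactly one implementation, and the JIT sees a single concrete class, as discussed above.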
          Jonathan Hsieh added a comment -

          bq. hbase needs to be recompiled to run against hadoop 2.0 hdfs

          When did this happen ? In the pom all that changes for the hadoop 2.0 compile are that some dependencies are changed. I just tried on a local machine only and just changing the libs dir I was able to run either version (stand alone only so not really an exhaustive test I know)

          Here's one reason:
          https://issues.apache.org/jira/browse/HBASE-5861?focusedCommentId=13259785&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13259785
          Here's another: HDFS-1620/HDFS-2412

          Luke Lu added a comment -

          +1 on Alex's option 2 and ServiceLoader. For HBase we only need to implement shims for metrics sources, so interfaces for the registry and the Mutable classes would suffice.

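          Shimming only the registry and the Mutable classes could reduce to an interface like the hypothetical one below; the toy in-memory implementation stands in for what the hadoop1/hadoop2 compat jars would map onto their metrics2 MetricsRegistry/MutableGauge types (all names here are illustrative, not HBase's API):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical version-agnostic facade over the metrics registry. */
interface MetricsRegistryShim {
  void newGauge(String name, long initialValue);
  void incGauge(String name, long delta);
  void decGauge(String name, long delta);
  long getGauge(String name);
}

/** Toy in-memory implementation standing in for a real compat-jar shim. */
public class InMemoryRegistryShim implements MetricsRegistryShim {
  private final Map<String, Long> gauges = new HashMap<String, Long>();

  public void newGauge(String name, long initialValue) {
    gauges.put(name, initialValue);
  }
  public void incGauge(String name, long delta) {
    gauges.put(name, getGauge(name) + delta);
  }
  public void decGauge(String name, long delta) {
    gauges.put(name, getGauge(name) - delta);
  }
  public long getGauge(String name) {
    Long v = gauges.get(name);
    return v == null ? 0L : v;
  }
}
```

          hbase-server would code only against the interface; each compat jar supplies the implementation bound to its Hadoop version's metrics classes.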
          Elliott Clark added a comment -

          @Jonathan
          Thanks.

          @Luke Lu
          Unfortunately we need to shim a little bit more since the actual MetricsBuilder has also changed. I'm trying to have something ready to show in a little bit.

          Luke Lu added a comment -

          @Elliott: MetricsBuilder/Collector is only needed for creating metrics dynamically and for implementing new Mutable metrics. Does HBase need to define new metrics at run time?

          Elliott Clark added a comment -

          @Luke
          Yes. We have per region metrics, per replication stream metrics, and per schema metrics. There might be others that I'm missing but those are the ones I have touched.
          In addition we will probably be implementing our own metrics classes. Right now we have MetricsHistogram which is based on metrics1.

          Elliott Clark added a comment -

          Here's a patch that adds hadoop compatibility shims. I needed something to prototype and test with, so I used my implementation of HBASE-6323 as an example.

          hbase-hadoop-compat contains the factory and the interface. The factory uses ResourceFinder from the geronimo project. It's much more flexible than ServiceLoader (it easily allows different locations and, most importantly, constructor arguments). I didn't want to add the whole geronimo project as a dep, so the code is copied in. I tried to give as much credit as I could. I can go back to using ServiceLoader if people object to having the copied-in code.

          hbase-hadoop1-compat and hbase-hadoop2-compat add the actual implementations of the classes whose interfaces are defined in hbase-hadoop-compat. I don't have an hbase-hadoop23-compat.

          Right now depending upon which profile is building the hbase-server module gets one of the above as a dependency.

          In addition, when building, assembly files only contain the hbase-hadoop{1,2}-compat directory needed. It's possible to keep the old assembly file the way it was and change the shell scripts to load only the one. But I didn't get to that.

          I tested it in place and locally after building tar.gz's on both

          • hadoop 1.0.3
          • hadoop 2.0.0-alpha

          In-place scripts still work, though I'm not really sure why or how. I need to investigate that later.

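          The profile-switched dependency described above might look roughly like the POM fragment below in hbase-server. This is an illustrative sketch, not the actual patch contents; profile ids and version handling are assumptions:

```xml
<!-- Hypothetical sketch: hbase-server pulls in exactly one compat jar
     depending on which build profile is active. -->
<profiles>
  <profile>
    <id>hadoop-1.0</id>
    <dependencies>
      <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-hadoop1-compat</artifactId>
        <version>${project.version}</version>
      </dependency>
    </dependencies>
  </profile>
  <profile>
    <id>hadoop-2.0</id>
    <dependencies>
      <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-hadoop2-compat</artifactId>
        <version>${project.version}</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```

          With this shape, the assembly for a given profile only ever bundles one of the two compat jars, so the ServiceLoader/ResourceFinder lookup finds a single implementation at runtime.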
          Alex Baranau added a comment -

          Heh, was about to commit example with ServiceLoader (and same extra modules based on your schema above), but looks like it makes sense to use ResourceFinder. Will use your patch and add metrics sources for RegionServer and Master to it (almost empty, as per discussion above) tomorrow and we can think about closing the issue (after review, etc.).

          Thank you, Elliott!

          Alex Baranau added a comment -

          "was about to commit" read as "was about to provide patch"

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12536162/HBASE-4050-0.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 7 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 11 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.replication.TestReplication
          org.apache.hadoop.hbase.replication.TestMultiSlaveReplication
          org.apache.hadoop.hbase.replication.TestMasterReplication

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2365//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2365//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2365//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2365//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2365//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2365//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2365//console

          This message is automatically generated.

          Elliott Clark added a comment -

          I missed one part in ReplicationSourceMetrics. This should fix the tests.

          Ted Yu added a comment -

          Amazing work !

          ReplicationMetricsSource javadoc is to be filled.
          And some catch clauses have boilerplate code:

          +      } catch (IOException e) {
          +        e.printStackTrace();  //To change body of catch statement use File | Settings | File Templates.
          

          I thought author name shouldn't appear in the file header:

          + * author David Blevins
          + * version $Rev$ $Date$
          

          Consider using uppercase M in the string below:

          +  private static final String METRICS_CONTEXT = "replicationmetrics";
          

          Do we need to check that delta is non-negative ?

          +    gaugeInt.decr(delta);
          

          Maybe give the assembly file a more descriptive name ?

          +        <assembly.file>src/assembly/two.xml</assembly.file>
          

          hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceMetrics.java is removed.
          hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/metrics2/ReplicationSourceMetrics.java is added.
          What about metrics1 ?

          Luke Lu added a comment -

          ResourceFinder seems to have a few nice properties, but do we actually require them? It's > 1 KLOC, a third of the whole patch. According to a comment in http://goo.gl/LmsXp, it doesn't support comments etc. in service definitions, and it doesn't offer a perceivable performance improvement over ServiceLoader. Also, why did you pick this particular implementation of ResourceFinder from xbeans-3.7 (the current release is 3.11.1)?

          Elliott Clark added a comment -

          ReplicationMetricsSource javadoc is to be filled.

          And some catch clauses have boilerplate code

          Agreed. I'll get a patch up soon.

          I thought author name shouldn't appear in the file header:

          I was trying to keep the source as close to the original as possible. I'm open for whatever; I was just trying to make sure that the people who wrote it got credit.

          Consider using uppercase M in the string below

          Hadoop's metrics2 uses all lowercase for context. https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java#L68

          Do we need to check that delta is non-negative ?

          Nope. Hadoop doesn't, so I followed suit.

          Maybe give the assembly file a more descriptive name ?

          Sure

          Elliott Clark added a comment -

          ResourceFinder seems to have a few nice properties but do we actually require them?

          Not 100%; we can get around ServiceLoader's shortcomings by having the ServiceLoader create factories that take in arguments and pass them to a constructor, though ResourceFinder would be cleaner.

          why did you pick this particular implementation of ResourceFinder from xbeans-3.7

          A friend sent me to that exact link, saying they were using it. I can pull a newer one.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12536167/HBASE-4050-1.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 7 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 11 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2366//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2366//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2366//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2366//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2366//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2366//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2366//console

          This message is automatically generated.

          Elliott Clark added a comment -

          Updated patch after the code review comments above.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12536174/HBASE-4050-2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 7 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 12 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2367//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2367//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2367//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2367//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2367//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2367//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2367//console

          This message is automatically generated.

          Lars George added a comment -

          Just a question, do we still need to use "hadoop-metrics.properties" (with or without the "2"), or could we now have the proper hbase-metrics.properties name?

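          Whatever the file ends up being called, metrics2 wires sinks per metrics-system prefix in its properties file. A minimal illustrative fragment (the filename and period are examples, not project defaults) might look like:

```properties
# Poll all metrics sources every 10 seconds (applies to every prefix).
*.period=10

# Route metrics from the "hbase" prefix to Hadoop's built-in file sink.
hbase.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
hbase.sink.file.filename=hbase-metrics.out
```

          The leading prefix ("hbase" here) is whatever name the metrics system is initialized with, which is why the file's name can change without affecting the format.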
          stack added a comment -

          Double license in new pom.

          Is it right hardcoding the version in the sub module: "+ <version>0.95-SNAPSHOT</version>"

          ditto in other new poms.

          Can we keep stuff in metrics rather than metrics and metrics2 now we have this fancy pants compatibility module?

          Did you say you wanted ResourceFinder because you wanted to pass stuff on construction? Where are you using constructors here?

          Can you do tests for your implementations (and have dependencies in these new modules against the said hadoop versions)?

          This looks great Elliott. I'd be w/ Luke trying to make do with ServiceFinder if possible rather than pull in ResourceFinder (Its in hbase-hadoop-compat because hbase-common is not a dependency going by your ascii art above?)

          Elliott Clark added a comment -

          Just a question, do we still need to use "hadoop-metrics.properties" (with or without the "2") or could we now have the proper hbase-metrics.properties name?

          We can do: hadoop-metrics2-hbase.properties

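          For reference, a minimal hadoop-metrics2-hbase.properties could look like the sketch below. The metrics2 framework uses a "prefix.option=value" syntax; the `hbase` prefix and the file sink settings here are illustrative of that format, not taken from the patch.

```properties
# Poll all metrics sources every 10 seconds.
*.period=10
# Illustrative sink: dump all metrics for the "hbase" prefix to a local file
# using the FileSink shipped with the Hadoop metrics2 framework.
hbase.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
hbase.sink.file.filename=hbase-metrics.out
```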
          Double license in new pom.

          Fixed in next patch

          Is it right hardcoding the version in the sub module: "+ <version>0.95-SNAPSHOT</version>"

          Yeah, I think so. That's how common has it. Removing it or using a variable doesn't work. I think it has something to do with how maven bootstraps module dependencies. It's lame; I'd love for a maven expert to find a better way.

          Can you do tests for your implementations (and have dependencies in these new modules against the said hadoop versions)?

          Sure, added in the new versions. The modules already have dependencies on the version of hadoop that they would shim, so I added some tests.

          Did you say you wanted ResourceFinder because you wanted to pass stuff on construction? Where are you using constructors here?

          I'd be w/ Luke trying to make do with ServiceFinder if possible rather than pull in ResourceFinder

          I'm not using it yet. However the discussion above basically wanted a very empty proof of concept. So that's what I created. The place that I see using constructor arguments would be when I implement Histograms.

          With all that said we can still just use factories, or guice like is being discussed on the mailing list so I'm going back to ServiceLoader in the new patch.
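          For context, ServiceLoader discovers an implementation through a provider-configuration file on the classpath. A compat module would register its implementation roughly like this; the file path and class names are inferred from the test names elsewhere in this issue (ReplicationMetricsSourceImplTest), not confirmed by the patch itself.

```
# File in the hadoop1/hadoop2 compat module, named after the interface:
# src/main/resources/META-INF/services/org.apache.hadoop.hbase.replication.regionserver.metrics.ReplicationMetricsSource
#
# Its single content line names the concrete implementation to load:
org.apache.hadoop.hbase.replication.regionserver.metrics.ReplicationMetricsSourceImpl
```

          With one such file per compat jar, whichever module is on the classpath at runtime determines which implementation ServiceLoader.load() returns.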

          Elliott Clark added a comment -

          Moved the main bulk of the implementation into base classes.

          Created a factory for metrics in hbase-hadoop2-compat as MetricsRegistry does not allow metrics to be removed.

          Added tests for implementations.

          Removed the ResourceFinder.

          Removed the double license.

          Ted Yu added a comment -

          Putting patch on review board would help.
          https://reviews.apache.org/r/new/ gave me Error 500 ...

          Ted Yu added a comment -

          Latest patch didn't compile against hadoop 2.0
          See https://builds.apache.org/job/PreCommit-HBASE-Build/2372/console

          Elliott Clark added a comment -

          I put it up on review board. https://reviews.apache.org/r/5921/

          @Ted
          Both of these worked for me:

          • mvn clean install assembly:assembly -Dhadoop.profile=2.0 -DskipTests
            • shasum: 1fbb2cfbe2a506de29e7343d255a728e6bc57cd0 target/hbase-0.95-SNAPSHOT.tar.gz
          • mvn clean install assembly:assembly -DskipTests
            • shasum: b20010f969f0d8f68a9fb08f824680a6345ea05a target/hbase-0.95-SNAPSHOT.tar.gz

          All tests in:

          • hbase-hadoop-compat
          • hbase-hadoop1-compat
          • hbase-hadoop2-compat
            are passing: cd hbase-hadoop-compat && mvn clean test && cd ../hbase-hadoop1-compat && mvn clean test && cd ../hbase-hadoop2-compat && mvn clean test
          ~/Code/hbase HBASE-4050 ✔ ➤ cd hbase-hadoop-compat && mvn clean test && cd ../hbase-hadoop1-compat && mvn clean test && cd ../hbase-hadoop2-compat && mvn clean test                                            ruby-1.9.3-p194  [0s] 
          [INFO] Scanning for projects...
          [INFO]                                                                         
          [INFO] ------------------------------------------------------------------------
          [INFO] Building HBase - Hadoop Compatibility 0.95-SNAPSHOT
          [INFO] ------------------------------------------------------------------------
          [INFO] 
          [INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hbase-hadoop-compat ---
          [INFO] Deleting /Users/eclark/Code/hbase/hbase-hadoop-compat/target
          [INFO] 
          [INFO] --- maven-remote-resources-plugin:1.1:process (default) @ hbase-hadoop-compat ---
          [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'.
          [INFO] Setting property: velocimacro.messages.on => 'false'.
          [INFO] Setting property: resource.loader => 'classpath'.
          [INFO] Setting property: resource.manager.logwhenfound => 'false'.
          [INFO] 
          [INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ hbase-hadoop-compat ---
          [debug] execute contextualize
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] skip non existing resourceDirectory /Users/eclark/Code/hbase/hbase-hadoop-compat/src/main/resources
          [INFO] Copying 3 resources
          [INFO] 
          [INFO] --- maven-compiler-plugin:2.0.2:compile (default-compile) @ hbase-hadoop-compat ---
          [INFO] Compiling 3 source files to /Users/eclark/Code/hbase/hbase-hadoop-compat/target/classes
          [INFO] 
          [INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ hbase-hadoop-compat ---
          [debug] execute contextualize
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] skip non existing resourceDirectory /Users/eclark/Code/hbase/hbase-hadoop-compat/src/test/resources
          [INFO] Copying 3 resources
          [INFO] 
          [INFO] --- maven-compiler-plugin:2.0.2:testCompile (default-testCompile) @ hbase-hadoop-compat ---
          [INFO] Compiling 1 source file to /Users/eclark/Code/hbase/hbase-hadoop-compat/target/test-classes
          [INFO] 
          [INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (default-test) @ hbase-hadoop-compat ---
          [INFO] Surefire report directory: /Users/eclark/Code/hbase/hbase-hadoop-compat/target/surefire-reports
          [INFO] Using configured provider org.apache.maven.surefire.junitcore.JUnitCoreProvider
          
          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.hbase.replication.regionserver.metrics.ReplicationMetricsSourceFactoryTest
          Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.021 sec
          
          Results :
          
          Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
          
          [INFO] 
          [INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (secondPartTestsExecution) @ hbase-hadoop-compat ---
          [INFO] Tests are skipped.
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 2.352s
          [INFO] Finished at: Thu Jul 12 16:36:36 PDT 2012
          [INFO] Final Memory: 12M/81M
          [INFO] ------------------------------------------------------------------------
          [INFO] Scanning for projects...
          [INFO]                                                                         
          [INFO] ------------------------------------------------------------------------
          [INFO] Building HBase - Hadoop One Compatibility 0.95-SNAPSHOT
          [INFO] ------------------------------------------------------------------------
          [INFO] 
          [INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hbase-hadoop1-compat ---
          [INFO] Deleting /Users/eclark/Code/hbase/hbase-hadoop1-compat/target
          [INFO] 
          [INFO] --- maven-remote-resources-plugin:1.1:process (default) @ hbase-hadoop1-compat ---
          [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'.
          [INFO] Setting property: velocimacro.messages.on => 'false'.
          [INFO] Setting property: resource.loader => 'classpath'.
          [INFO] Setting property: resource.manager.logwhenfound => 'false'.
          [INFO] 
          [INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ hbase-hadoop1-compat ---
          [debug] execute contextualize
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] Copying 1 resource
          [INFO] Copying 3 resources
          [INFO] 
          [INFO] --- maven-compiler-plugin:2.0.2:compile (default-compile) @ hbase-hadoop1-compat ---
          [INFO] Compiling 2 source files to /Users/eclark/Code/hbase/hbase-hadoop1-compat/target/classes
          [INFO] 
          [INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ hbase-hadoop1-compat ---
          [debug] execute contextualize
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] skip non existing resourceDirectory /Users/eclark/Code/hbase/hbase-hadoop1-compat/src/test/resources
          [INFO] Copying 3 resources
          [INFO] 
          [INFO] --- maven-compiler-plugin:2.0.2:testCompile (default-testCompile) @ hbase-hadoop1-compat ---
          [INFO] Compiling 2 source files to /Users/eclark/Code/hbase/hbase-hadoop1-compat/target/test-classes
          [INFO] 
          [INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (default-test) @ hbase-hadoop1-compat ---
          [INFO] Surefire report directory: /Users/eclark/Code/hbase/hbase-hadoop1-compat/target/surefire-reports
          [INFO] Using configured provider org.apache.maven.surefire.junitcore.JUnitCoreProvider
          
          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.hbase.metrics.BaseMetricsSourceImplTest
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
          Running org.apache.hadoop.hbase.replication.regionserver.metrics.ReplicationMetricsSourceImplTest
          Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.132 sec
          
          Results :
          
          Tests run: 7, Failures: 0, Errors: 0, Skipped: 0
          
          [INFO] 
          [INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (secondPartTestsExecution) @ hbase-hadoop1-compat ---
          [INFO] Tests are skipped.
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 2.926s
          [INFO] Finished at: Thu Jul 12 16:36:40 PDT 2012
          [INFO] Final Memory: 15M/81M
          [INFO] ------------------------------------------------------------------------
          [INFO] Scanning for projects...
          [INFO]                                                                         
          [INFO] ------------------------------------------------------------------------
          [INFO] Building HBase - Hadoop Two Compatibility 0.95-SNAPSHOT
          [INFO] ------------------------------------------------------------------------
          [INFO] 
          [INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hbase-hadoop2-compat ---
          [INFO] Deleting /Users/eclark/Code/hbase/hbase-hadoop2-compat/target
          [INFO] 
          [INFO] --- maven-remote-resources-plugin:1.1:process (default) @ hbase-hadoop2-compat ---
          [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'.
          [INFO] Setting property: velocimacro.messages.on => 'false'.
          [INFO] Setting property: resource.loader => 'classpath'.
          [INFO] Setting property: resource.manager.logwhenfound => 'false'.
          [INFO] 
          [INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ hbase-hadoop2-compat ---
          [debug] execute contextualize
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] Copying 1 resource
          [INFO] Copying 3 resources
          [INFO] 
          [INFO] --- maven-compiler-plugin:2.0.2:compile (default-compile) @ hbase-hadoop2-compat ---
          [INFO] Compiling 3 source files to /Users/eclark/Code/hbase/hbase-hadoop2-compat/target/classes
          [INFO] 
          [INFO] --- maven-dependency-plugin:2.1:build-classpath (create-mrapp-generated-classpath) @ hbase-hadoop2-compat ---
          [INFO] Wrote classpath file '/Users/eclark/Code/hbase/hbase-hadoop2-compat/target/test-classes/mrapp-generated-classpath'.
          [INFO] 
          [INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ hbase-hadoop2-compat ---
          [debug] execute contextualize
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] skip non existing resourceDirectory /Users/eclark/Code/hbase/hbase-hadoop2-compat/src/test/resources
          [INFO] Copying 3 resources
          [INFO] 
          [INFO] --- maven-compiler-plugin:2.0.2:testCompile (default-testCompile) @ hbase-hadoop2-compat ---
          [INFO] Compiling 2 source files to /Users/eclark/Code/hbase/hbase-hadoop2-compat/target/test-classes
          [INFO] 
          [INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (default-test) @ hbase-hadoop2-compat ---
          [INFO] Surefire report directory: /Users/eclark/Code/hbase/hbase-hadoop2-compat/target/surefire-reports
          [INFO] Using configured provider org.apache.maven.surefire.junitcore.JUnitCoreProvider
          
          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.hbase.metrics.BaseMetricsSourceImplTest
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
          Running org.apache.hadoop.hbase.replication.regionserver.metrics.ReplicationMetricsSourceImplTest
          Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec
          
          Results :
          
          Tests run: 7, Failures: 0, Errors: 0, Skipped: 0
          
          [INFO] 
          [INFO] --- maven-surefire-plugin:2.12-TRUNK-HBASE-2:test (secondPartTestsExecution) @ hbase-hadoop2-compat ---
          [INFO] Tests are skipped.
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 4.936s
          [INFO] Finished at: Thu Jul 12 16:36:46 PDT 2012
          [INFO] Final Memory: 21M/81M
          [INFO] ------------------------------------------------------------------------
          
          Ted Yu added a comment -
          +public interface BaseMetricsSource {
          

          I suggest adding the following for the above interface:

          @InterfaceAudience.Public
          @InterfaceStability.Evolving
          
          +   * Subtract some amount to a gauge.
          

          'to a' -> 'from a'

          +        rms = ServiceLoader.load(ReplicationMetricsSource.class).iterator().next();
          

          Shall we traverse the iterator and warn the user if more than one implementation is found?

          Will continue on the review board.
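          The traversal Ted suggests could be sketched as below. This is a hypothetical helper, not code from the patch: it takes the first registered implementation and warns (here, via stderr) when several providers are on the classpath.

```java
import java.util.Iterator;
import java.util.ServiceLoader;

public class ServiceLoaderCheck {

    /**
     * Load exactly one implementation of the given interface via ServiceLoader.
     * Returns null when no provider is registered; warns when more than one is
     * found and falls back to the first.
     */
    public static <T> T loadSingle(Class<T> clazz) {
        Iterator<T> it = ServiceLoader.load(clazz).iterator();
        if (!it.hasNext()) {
            // No META-INF/services entry for this interface on the classpath.
            return null;
        }
        T first = it.next();
        if (it.hasNext()) {
            System.err.println("Warning: multiple implementations of "
                + clazz.getName() + " found; using "
                + first.getClass().getName());
        }
        return first;
    }

    public static void main(String[] args) {
        // Runnable has no registered providers on a default classpath,
        // so this exercises the empty case and returns null.
        Runnable r = loadSingle(Runnable.class);
        System.out.println(r == null ? "none" : r.getClass().getName());
    }
}
```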

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12536301/HBASE-4050-5.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 12 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.coprocessor.TestClassLoading
          org.apache.hadoop.hbase.regionserver.TestServerCustomProtocol

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2375//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2375//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2375//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2375//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2375//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2375//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2375//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12536329/HBASE-4050-7.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 10 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2380//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2380//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2380//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2380//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2380//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2380//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2380//console

          This message is automatically generated.

          stack added a comment -

          I suppose you don't need to set test size annotation on below because annotations are not a dependency when this is built:

          +public class ReplicationMetricsSourceFactoryTest {
          

          Does BaseMetricsSource not implement MetricsSource?

          +public class BaseMetricsSourceImpl implements BaseMetricsSource, MetricsSource {
          

          These need to be this accessible:

          +  public ConcurrentMap<String, MetricMutableGaugeLong>
          +      gauges = new ConcurrentHashMap<String, MetricMutableGaugeLong>();
          +  public ConcurrentMap<String, MetricMutableCounterLong> counters =
          +      new ConcurrentHashMap<String, MetricMutableCounterLong>();
          +
          +  protected String metricsContext;
          +  protected String metricsName;
          +  protected String metricsDescription;
          

          (I see above twice)

          The stuff below where we have a static boolean and in constructor we test something already created could be a PITA in minihbase setups? Does it have to be static? Aren't we slinging singletons here anyways? (The singletons are ok in minihbasecontext too?):

          +    if (!hasInited) {
          +      //Not too worried about mutli-threaded here as all it does is spam the logs.
          +      hasInited = true;
          +      DefaultMetricsSystem.initialize(HBASE_METRICS_SYSTEM_NAME);
          +    }
          

          'hasInited' is name of a method that tests 'inited' variable... suggest changing its name.

          What about that jmx mess registering metrics in tests? The exception saying metrics already registered because we have more than one daemon in the one jvm. We still have that issue here?

          You wanted to complete this: "+/** BaseClass for */"

          Another class has no class comments though has the comment delimiters.

          Do we have to have metrics2 package? Can this new stuff be in the metrics package?

          I thought I saw a patch where you'd renamed the properties file to what LarsG suggested?

          You seem to have made it so we do not need to have a metrics2 in hbase... thats great... but in the properties file I see:

          +# See package.html for org.apache.hadoop.metrics2 for details
          +
          +*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
          

          Is that just old stuff?

          Good stuff Elliott. I'd be up for committing this and then doing other stuff in other issues.

          Elliott Clark added a comment -

          I suppose you don't need to set test size annotation on below because annotations are not a dependency when this is built:

          Correct. The hbase-hadoop-compat module has no hadoop dependency. In addition hbase-hadoop1-compat and hbase-hadoop2-compat currently only have unit tests, so they have the second test pass completely turned off.

          Does BaseMetricsSource not implement MetricsSource?

          It does. I guess it's just a little too explicit. I'll fix it in the patch first thing tomorrow morning.

          These need to be this accessible:

          Kind of, but not 100%; I'm open to either way. In hadoop1, metrics are pretty hard to test. Opening the maps up will make testing any classes that extend MetricsBaseSourceImpl easier. Those classes that add functionality will need those maps to be public for testing. That said, this patch doesn't have those classes in it, so if you prefer I could make them protected and change that when needed.

          The stuff below where we have a static boolean and in constructor we test something already created could be a PITA in minihbase setups? Does it have to be static? Aren't we slinging singletons here anyways? (The singletons are ok in minihbasecontext too?):

          We are currently slinging a singleton. However, when we add in more than just replication metrics we'll have more than one BaseMetricsSourceImpl. The DefaultMetricsSystem.initialize call can be done multiple times as long as it's inited with the same string; however, it complains quite loudly in the logs.

          'hasInited' is name of a method that tests 'inited' variable... suggest changing its name.

          Sure. Something like defaultMetricsInited
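
          For what it's worth, a minimal once-only guard along those lines could look like the sketch below. All class and variable names here are illustrative, not the actual patch code; in the real patch the Runnable would be the DefaultMetricsSystem.initialize(...) call.

          ```java
          import java.util.concurrent.atomic.AtomicBoolean;

          // Hypothetical sketch of a once-only init guard; names are made up
          // for illustration and do not come from any attached patch.
          public class MetricsSystemInitGuard {
            // Renamed from 'hasInited' to make clear it is a state flag, not a method.
            private static final AtomicBoolean defaultMetricsSystemInited =
                new AtomicBoolean(false);

            /** Runs initAction at most once across all callers; returns true if it ran. */
            public static boolean initOnce(Runnable initAction) {
              if (defaultMetricsSystemInited.compareAndSet(false, true)) {
                initAction.run();
                return true;
              }
              return false;
            }
          }
          ```

          Using compareAndSet instead of a bare boolean check also closes the small race the original code shrugs off, at no real cost.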

          What about that jmx mess registering metrics in tests? The exception saying metrics already registered because we have more than one daemon in the one jvm. We still have that issue here?

          We'll still have that. A little less spam, but not completely gone. Basically, when all metrics are moved to metrics2 we'll see 4 or 5 log messages (one per dupe of ReplicationMetricsSource et al.) rather than the massive amount we see now.
          Maybe in tests we should silence the messages from those classes? Probably a good issue to file for the metrics cleanup.

          Do we have to have metrics2 package? Can this new stuff be in the metrics package?

          Nope. Earlier you were asking to remove it. So everything is in the metrics namespace. That should make things a little nicer if we go the DI route (that's being discussed on the mailing list) and someone wants to go back to the old hadoop metrics.

          I thought I saw a patch where you'd renamed the properties file to what LarsG suggested?

          Nope, I just replied that we could. That file needs some examples and other love (ganglia examples and examples for regionserver/rest). Seems like a good issue for me to file after this.

          I'll clean up the two javadocs tomorrow morning.

          Elliott Clark added a comment -

          Addressed stack's comments.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12536404/HBASE-4050-8.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 11 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 10 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.catalog.TestMetaReaderEditor

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2383//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2383//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2383//console

          This message is automatically generated.

          Elliott Clark added a comment -

          Test failure looks unrelated. Works on my machine.

          Ted Yu added a comment -

          Would integrate the patch later this afternoon if there is no further review comment.

          Alex Baranau added a comment -

          Should I add metric sources for RegionServer and Master in a patch? (those from 4050-metrics-v3.patch)

          Ted Yu added a comment -

          How long would it take to align them with the work from Elliot ?

          Alex Baranau added a comment -

          Let me try to do that in the coming hour or two. If not, you can proceed without it, I guess.

          Elliott Clark added a comment -

          Let's do those in another issue. I'm planning on creating an umbrella JIRA for the move to metrics2. This would probably also be where most of the metrics cleanup work that Lars was asking for would be listed.

          So far I have:

          • Investigate moving to DI (guice) framework for plugin arch.
          • Naming and documenting of the hadoop-metrics2.properties file
          • Create histogram class for metrics 2
          • Move RegionServer Metrics to metrics2
          • Move Master Metrics to metrics 2
          • Move external servers to metrics2 (thrift,thrift2,rest)
          • Investigate having hbase-env.sh decide which hadoop-compat to include
          Alex Baranau added a comment -

          This works too.

          I was going to add "almost empty" placeholders for these points:

          Move RegionServer Metrics to metrics2

          Move Master Metrics to metrics 2

          With one metric in each, so that the actual moving of metrics can be done separately by placing them in these classes, as was discussed in the first part of the comments on this issue. I can do that as part of another issue (I'd say the code + some unit tests are there, in that early patch).

          Ted Yu added a comment -

          The above is a nice list.
          The title of this JIRA is general sounding.
          Does it make sense to create the above sub-tasks (including Elliot's latest patch) under this JIRA ?

          Elliott Clark added a comment -

          Doesn't matter to me too much

          Elliott Clark added a comment -

          There. After I saw Ted create the first sub task I created all the rest. Alex I assigned some to you. Feel free to assign them as you see fit.

          Alex Baranau added a comment -

          To Elliott, just to understand:

          MetricsRegistry does not allow metrics to be removed.

          1. As we don't use the MetricsRegistry class and are basically reimplementing it inside BaseMetricsSourceImpl, should we also allow adding tags (MetricsTag) there? Sounds like a useful feature.

          2. Why do we need to remove metrics?

          Alex Baranau added a comment -

          After I saw Ted create the first sub task I created all the rest. Alex I assigned some to you.

          Great, thanx!

          Elliott Clark added a comment -
          1. Yes, it does for sure.
          2. When replication sources are turned off we need to remove them. Also pre region metrics and per table metrics need to be removed when they are no longer on a RS.
          Elliott Clark added a comment -

          s/pre region/per region/g

          Alex Baranau added a comment -

          Looks to me like we are going to copy-paste MetricsRegistry with one addition: allowing metrics to be removed. Not nice.

          Btw, it seems to me (if I'm not mistaken) that there is a way to remove a metric from MetricsRegistry. Though it is far from nice, maybe it is OK to depend on it and file a JIRA issue to allow removals? There's a method metrics() that returns map entries (in hadoop1) or map values (in hadoop2) which back the internal map of metrics and thus can be used to remove items. However:
          1. searching the list for the item to remove is not efficient. Do we want to do it frequently? I suppose we may, but I'm not sure
          2. in general it's very bad to rely on implementation details (in the future they could return a copy of the values collection)

          Having said that, is it better to duplicate the MetricsRegistry class in our code, or to use/rely on that metrics() method for now + file a JIRA to allow metrics removals?
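
          As an illustration of the duplicate-with-removal alternative (all names below are hypothetical, not from any attached patch): a registry that backs its metrics with a ConcurrentHashMap can support removal in constant time, which the linked-list collection behind metrics() cannot.

          ```java
          import java.util.concurrent.ConcurrentHashMap;
          import java.util.concurrent.ConcurrentMap;
          import java.util.concurrent.atomic.AtomicLong;

          // Hypothetical sketch of an HBase-side registry with removal support.
          // AtomicLong stands in for a real mutable counter metric type.
          public class RemovableMetricsRegistry {
            private final ConcurrentMap<String, AtomicLong> counters =
                new ConcurrentHashMap<>();

            /** Returns the counter for this name, creating it on first use. */
            public AtomicLong getOrCreateCounter(String name) {
              return counters.computeIfAbsent(name, n -> new AtomicLong());
            }

            /** The operation MetricsRegistry lacks: drop a metric by name, O(1). */
            public boolean removeMetric(String name) {
              return counters.remove(name) != null;
            }

            public int size() {
              return counters.size();
            }
          }
          ```

          Removal matters here because replication sources can be turned off and per-region/per-table metrics disappear when regions move off a regionserver.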

          Ted Yu added a comment -

          filing JIRA to allow metrics removals

          +1 on above.

          Elliott Clark added a comment -
            Collection<MutableMetric> metrics()
          

          Removing from a linked list (the collection backing the map in question) is icky. I filed a JIRA a while ago: HADOOP-8313.
          Yes, this is not great. As soon as possible I would love to remove the metrics factory and all related maps. That would greatly simplify stuff. I think we should get things working now and refactor away as soon as possible.

          Alex Baranau added a comment -

          Looking at adding MetricSources for Master and RegionServer and have some Qs.

          1) MetricsSystem - does it always have the name "hbase"? What if we want to run a single-node cluster? It would be great to separate the metrics systems. E.g. in Hadoop code I see "datanode", "namenode", …

          2) I was thinking of extracting the maps (counters & gauges) into a separate class along with the code that manages them, i.e. into a new hbase.MetricsRegistry. We can then add Tags in there, etc. I.e. this would be pretty much similar to org.apache.hadoop.metrics2.lib.MetricsRegistry, but would also allow removing metrics (also read the last piece of this comment to see why we might want that as well). Thoughts?

          3) Does it make sense to create interfaces for MetricMutableGaugeLong and MetricMutableCounterLong and allow MetricSource implementations to create & obtain them? So that they could be fields in MetricSource classes. This would allow at least:

          • access metrics much faster, without lookup in Map
          • this would remove "proxy methods" from BaseMetricsSourceImpl (or from MetricsRegistry, after extracting it) and will make it easier to provide other implementations (in addition to counter & gauge) independently (i.e. without adding proxy methods in BaseMetricsSourceImpl).
            Thoughts?
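
          A rough sketch of what point 3 might look like; the interface and class names below are made up for illustration and are not proposed API.

          ```java
          import java.util.concurrent.atomic.AtomicLong;

          // Hypothetical small interfaces so MetricsSource implementations can hold
          // metrics as fields and update them without a map lookup per call.
          interface MutableCounter {
            void incr(long delta);
            long value();
          }

          interface MutableGauge {
            void set(long value);
            long value();
          }

          // A trivial in-memory counter a compat module might hand back.
          class SimpleCounter implements MutableCounter {
            private final AtomicLong v = new AtomicLong();
            public void incr(long delta) { v.addAndGet(delta); }
            public long value() { return v.get(); }
          }
          ```

          Each hadoop1/hadoop2 compat module would then provide its own implementation backed by that framework's metric classes; callers only see the interface.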

          4) In the case of RegionServer and Master metrics I need to pass initialization parameters. By analogy with DataNode metrics, I'd pass servername and sessionId. As we use ServiceLoader, there's no way to pass parameters to the constructor. I was thinking about adding something like an init() method to the MasterMetricsSource (which is in the common compat module), but that looks not very nice and will not actually work very well with the current BaseMetricsSource implementation, as things are initialized in the constructor there. I.e. I would have to change that and move things into the init() method (doesn't look nice to me).

Another thought. What if we slightly rearrange what is placed in the compat module, and put more "lower-level" instruments there? We could:

• extract hbase.MetricsRegistry as per the above, make a common interface for it and place it in the hadoop-compat module, and provide a factory there for creating hbase.MetricsRegistry as well
• define an hbase.MetricsSource interface with a single method, getMetricRegistry()
• create a service wrapper of DefaultMetricsSystem which allows registering an hbase.MetricsSource
  (in each of the hadoop1/2-compat modules, hbase.MetricsSource.getMetricRegistry() can be used to implement hadoop.MetricsSource's getMetrics() method by getting data from the hbase.MetricsRegistry)

This way we will offer lower-level tools to MetricsSource implementations (in the main hbase modules) and will be more flexible (e.g. with passing parameters to MetricsSource implementation ctors). Also, MetricsSource implementations (in the main modules) will look closer to what was intended by the Hadoop Metrics framework (judging by the datanode/namenode metric sources). We could do what is mentioned in the 3rd point above. E.g. a typical MetricsSource would look like:

public class HMasterMetrics implements MetricsSource {
  final String name;
  [...]
  final MetricsRegistry registry = MetricsRegistryFactory.create("hmaster");

  final MutableCounterLong clusterRequests =
      registry.newCounter("cluster_requests", "", 0);
  [...]

  public HMasterMetrics(final String name, final String sessionId) {
    this.name = name;
    JvmMetricsSourceFactory.create("HMaster", sessionId);
    registry.setContext(name).tag("sessionId", "", sessionId);
  }

  @Override
  public MetricsRegistry getMetricRegistry() {
    return registry;
  }

  public void incClusterRequests(int delta) {
    clusterRequests.inc(delta);
  }
  [...]
}

Also, it will be easier to move away from using this shim when it is no longer needed, by simply switching these shim hbase.MetricsRegistry, hbase.MetricsSource, and hbase.DefaultMetricsSystem classes to the hadoop.* ones.

I see a lot of benefits, but maybe I'm missing something. The only fear is that by going with lower-level tool shims we might end up having more of these shim classes/interfaces (like adding shim MetricMutableGaugeLong and MetricMutableCounterLong interfaces, in this case so far). Though this might not be that bad.

Does it make sense to you guys at all?

I hope I'm not being counter-productive by suggesting changes to things that already work (the patch by Elliott).

          Elliott Clark added a comment -
1. I would like that, except I think that grouping everything together under an hbase node in JMX is nice. Since Hadoop took the top one, that leaves us sharing the metrics system name. So all of the contexts will have to be well named, so that it's pretty evident which daemon they came from.
          2. Sure. That would make it easier to remove if we ever get all the methods that we need on the hadoop metrics registry.
3. But MetricMutableGaugeLong and MetricMutableCounterLong will never implement our interfaces, so you would be forced to create whole wrapper classes around every class. I think it would be better if sources just implemented specialized methods for interacting with known MetricMutables. Then you get all the speed, and it abstracts the actual metrics from any code in hbase-server.
          4. I was going to handle this by having ServiceLoader find a factory implementation that creates an implementation of the source needed.
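A sketch of that pattern, under assumed names (the interfaces and the fallback here are illustrative only, not the actual patch code): ServiceLoader instantiates a no-arg factory, and the factory's create() is where runtime parameters such as serverName/sessionId get passed:

```java
import java.util.Iterator;
import java.util.ServiceLoader;

// Hypothetical interfaces; the real names in the patch may differ.
interface MasterMetricsSource {
  String describe();
}

interface MasterMetricsSourceFactory {
  // Constructor-style parameters are passed here, not via the ServiceLoader.
  MasterMetricsSource create(String serverName, String sessionId);
}

// Default used when no provider is registered in META-INF/services, so this
// sketch is runnable on its own.
class DefaultMasterMetricsSourceFactory implements MasterMetricsSourceFactory {
  public MasterMetricsSource create(final String serverName, final String sessionId) {
    return new MasterMetricsSource() {
      public String describe() { return serverName + "/" + sessionId; }
    };
  }
}

class MetricsSourceLoader {
  static MasterMetricsSourceFactory findFactory() {
    // ServiceLoader only needs a no-arg constructor on the *factory*.
    Iterator<MasterMetricsSourceFactory> it =
        ServiceLoader.load(MasterMetricsSourceFactory.class).iterator();
    return it.hasNext() ? it.next() : new DefaultMasterMetricsSourceFactory();
  }
}
```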
5. As far as moving SourceImpl into hbase-hadoop-compat: it's not really possible, because it needs to implement interfaces that come from Hadoop, and those interfaces refer to classes that have changed.
          Ted Yu added a comment -

Elliott's first patch used ResourceFinder, which allowed passing initialization parameters to the ctor.
Maybe revive that approach?

          Alex Baranau added a comment -

Let me try to adjust the patch to show what I meant.

          Alex Baranau added a comment -

Tried to sketch what I suggested so that it is easier to follow. I haven't added the hadoop1/2-compat implementations of the interfaces I added to hadoop-compat, but they should be pretty clear: they will just be wrappers/proxies on top of hadoop.metrics2 classes.
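To illustrate what "wrappers/proxies" means here (all names are stand-ins; Metrics2CounterStub substitutes for org.apache.hadoop.metrics2.lib.MetricMutableCounterLong so the sketch compiles without a Hadoop dependency):

```java
// Hypothetical shim interface living in hbase-hadoop-compat.
interface ShimCounterLong {
  void incr(long delta);
}

// Stand-in for hadoop's metrics2 mutable counter class.
class Metrics2CounterStub {
  private long value;
  void incr(long delta) { value += delta; }
  long value() { return value; }
}

// The hadoop1/2-compat implementation: a thin proxy that delegates every call
// to the underlying metrics2 mutable counter.
class ShimCounterLongImpl implements ShimCounterLong {
  private final Metrics2CounterStub delegate;

  ShimCounterLongImpl(Metrics2CounterStub delegate) {
    this.delegate = delegate;
  }

  public void incr(long delta) {
    delegate.incr(delta);
  }
}
```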

          you would be forced to create whole wrapper classes around every class.

Yeah... That is what I meant by:

          The only fear is that by going with lower-level tools shims we might end up having more of these shim classes/interfaces

Not sure how many of them will be needed, i.e. I can't say how bad it is. I believe we'll need just several (Elliott's patch uses just 2), and their interfaces are very small.

Anyhow, I will be completely OK if you turn down my suggestion. Just wanted to share thoughts in which I saw some benefits (though there are cons as well).

          Elliott Clark added a comment -

It took me a little while to get the patch to apply.

Looking at it, I don't see anything that's implementing hadoop 1's or 2's MetricsSource, so none of the metrics implemented in this patch will get exposed. Implementing that part is what convinced me that most of the implementation of BaseMetricsSourceImpl needed to be in hbase-hadoop{1,2}-compat. Just a registry is not enough to have things exposed through metrics.

          We should move this discussion to HBASE-6411 since this was converted to an umbrella issue and 6411 is for master.

          Alex Baranau added a comment -

Ok... Added all implementations, see HBASE-6411. Also added a unit test to actually test the metrics values (I guess this is something we need to add for ReplicationMetricSources as well).

          Hudson added a comment -

          Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #151 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/151/)
          HBASE-4050 Combine Master Metrics into a single class (Revision 1377896)

          Result = FAILURE
          stack :
          Files :

          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSource.java
          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceFactory.java
          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsWrapper.java
          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/metrics/ReplicationMetricsSource.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceFactoryImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/metrics/ReplicationMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSource
          • /hbase/trunk/hbase-hadoop1-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSourceFactory
          • /hbase/trunk/hbase-hadoop1-compat/src/test/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImplTest.java
          • /hbase/trunk/hbase-hadoop1-compat/src/test/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImplTest.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceFactoryImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/metrics/ReplicationMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSource
          • /hbase/trunk/hbase-hadoop2-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSourceFactory
          • /hbase/trunk/hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImplTest.java
          • /hbase/trunk/hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImplTest.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MXBean.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MXBeanImpl.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsWrapperImpl.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMXBean.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/metrics
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/metrics/TestMasterMetricsWrapper.java
          Hudson added a comment -

          Integrated in HBase-TRUNK #3286 (See https://builds.apache.org/job/HBase-TRUNK/3286/)
          HBASE-4050 Combine Master Metrics into a single class (Revision 1377896)

          Result = FAILURE
          stack :
          Files :

          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSource.java
          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceFactory.java
          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsWrapper.java
          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/metrics/ReplicationMetricsSource.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceFactoryImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/metrics/ReplicationMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSource
          • /hbase/trunk/hbase-hadoop1-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSourceFactory
          • /hbase/trunk/hbase-hadoop1-compat/src/test/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImplTest.java
          • /hbase/trunk/hbase-hadoop1-compat/src/test/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImplTest.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceFactoryImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/metrics/ReplicationMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSource
          • /hbase/trunk/hbase-hadoop2-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.master.metrics.MasterMetricsSourceFactory
          • /hbase/trunk/hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImplTest.java
          • /hbase/trunk/hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImplTest.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MXBean.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MXBeanImpl.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsWrapperImpl.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMXBean.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/metrics
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/metrics/TestMasterMetricsWrapper.java
          Hudson added a comment -

          Integrated in HBase-TRUNK #3304 (See https://builds.apache.org/job/HBase-TRUNK/3304/)
          HBASE-4050 Clean up BaseMetricsSourceImpl (Revision 1381008)

          Result = FAILURE
          stack :
          Files :

          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          Hudson added a comment -

          Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #160 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/160/)
          HBASE-4050 Clean up BaseMetricsSourceImpl (Revision 1381008)

          Result = FAILURE
          stack :
          Files :

          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetricsSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/metrics/BaseMetricsSourceImpl.java
          stack added a comment -

What is to be done with the patches on this issue? Were they taken up elsewhere? My guess is that the commit above came from a commit on another issue that had the wrong JIRA number.

          Elliott Clark added a comment -

One of the patches above was the one that added the compat jars and the first versions of some of the classes that were used. A couple of patches have since been committed with HBASE-4050 in the commit message, even though the code was from sub-issues.

This really has one issue left to solve, and then the metrics2 move will be complete. Now that all testing is possible and all the helper classes are in (MetricsAssertHelper, MetricsHistogram, and MetricQuantile), just the region server is left. I'll start work on that. However, it will be the most detail-oriented piece, so it might take a little longer than some of the other sub-issues.

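The compat-module split described above — a version-neutral metrics source interface, implemented once per Hadoop line in hbase-hadoop1-compat and hbase-hadoop2-compat — can be sketched roughly as follows. The names and methods here are illustrative stand-ins, not the actual HBase or Hadoop metrics2 APIs (the real BaseMetricsSourceImpl registers with the Hadoop metrics system rather than keeping a local map):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Hypothetical stand-in for the version-neutral source interface that a
    // compat layer would expose; each compat jar ships its own implementation.
    interface BaseMetricsSource {
        void incCounter(String name, long delta);
        long get(String name);
    }

    // Stand-in for a concrete implementation such as the metrics2-backed one
    // in hbase-hadoop2-compat. Callers only see the interface, so swapping
    // Hadoop versions means swapping jars, not changing caller code.
    static class BaseMetricsSourceImpl implements BaseMetricsSource {
        private final Map<String, Long> counters = new HashMap<>();

        public void incCounter(String name, long delta) {
            counters.merge(name, delta, Long::sum);
        }

        public long get(String name) {
            return counters.getOrDefault(name, 0L);
        }
    }

    public static void main(String[] args) {
        BaseMetricsSource source = new BaseMetricsSourceImpl();
        source.incCounter("regionServerRequests", 5);
        source.incCounter("regionServerRequests", 3);
        System.out.println(source.get("regionServerRequests")); // prints 8
    }
}
```

The point of the pattern is that region server, master, and other consumers compile only against the interface module, so a single HBase build can run against either Hadoop metrics framework.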
          Enis Soztutar added a comment -

Last update indicates that there is still some more work to be done on this. Elliott, since this is marked critical, would you be able to work on it in the 0.95.1 or 0.95.2 timeframe?

          Elliott Clark added a comment -

          There's an apache blog post explaining all this that should be up soon.

          stack added a comment -

          Just waiting on doc. All else is in.

          Elliott Clark added a comment -

          All of the tasks that must be done have been completed.

          Yay

          stack added a comment -

          whhop!


  People

  • Assignee: Elliott Clark
  • Reporter: Eric Yang
  • Votes: 3
  • Watchers: 23
