Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.99.0, hbase-10191
    • Fix Version/s: 0.99.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Pull in Netty 4 and sort out the consequences.

      Attachments

    1. 10573.patch (20 kB, Andrew Purtell)
    2. 10573.patch (29 kB, Andrew Purtell)
    3. 10573.v3.patch (25 kB, Nicolas Liochon)
    4. 10573.add.patch (2 kB, Nicolas Liochon)

        Activity

        Andrew Purtell added a comment -

        I have brought in Netty 4 and excluded Netty 3 from our Hadoop dependencies. It "works".

        It is ugly that hadoop-mapreduce-client-core and hadoop-mapreduce-client-jobclient bring in Netty 3.x as a dependency; possible complications for MR applications using HBase are concerning. So does hadoop-minicluster, but that is completely understandable, so I bring in the old Netty 3.x at test scope, and tests pass.

        In terms of HBase code, I needed to reimplement the multicast ClusterStatusPublisher (not sure if it works; no test for it) and modify TestFuzzyRowAndColumnRangeFilter.

        Nicolas Liochon added a comment -

        ClusterStatusPublisher

        The test is TestHCM#testClusterStatus; it seems to fail under some IDEs.

        Nick Dimiduk added a comment -

        I'm curious about the decision to use Netty's classes here. In particular, I believe the reason Matt Corgan introduced the ByteRange interface was that none of the existing "byte buffer" concepts allowed for buffer reuse; the concern was excessive GC during compactions. I looked at Netty's ByteBuf a while back and found it didn't support instance reuse, thus was insufficient for his requirement. Did I miss something?

        Nicolas Liochon added a comment -

        I actually have the same question about using Netty's ByteBuf (not Netty itself). I wonder if we won't have an issue, for example when we want to pass a buffer from the HBase socket to HDFS. The Java API is very, very bad, and does not even support extension, so I understand why Netty had to rewrite it. But I'm not sure about the interoperability then.

        Note that I'm not a Netty expert, especially not a Netty 4 one, so my concerns may just be off.

        Andrew Purtell added a comment -

        I'm curious about the decision to use Netty's classes here

        Call it an investigation.

        • We use Netty already
        • Composite buffers
        • Arena allocation
        • Dynamic buffer resizing
        • Reference counting
        • Dev and testing by another community
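
        As a rough illustration of the composite-buffer, arena-allocation, and reference-counting points, a minimal sketch against the Netty 4 API (editor's illustration; nothing here is from the patch):

        import io.netty.buffer.ByteBuf;
        import io.netty.buffer.CompositeByteBuf;
        import io.netty.buffer.PooledByteBufAllocator;

        public class ByteBufSketch {
            public static void main(String[] args) {
                // Arena-based pooled allocator: released buffers go back
                // to the pool instead of becoming garbage.
                PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;

                ByteBuf key = alloc.buffer(16);    // refCnt() == 1
                ByteBuf value = alloc.buffer(64);
                key.writeBytes("row-1".getBytes());
                value.writeBytes("payload".getBytes());

                // Composite buffer: key and value appear as one logical
                // buffer, without copying either component.
                CompositeByteBuf kv = alloc.compositeBuffer();
                kv.addComponents(key, value);      // composite owns both now

                kv.retain();                       // reference counting
                kv.release();
                kv.release();  // refCnt() == 0: components return to the pool
            }
        }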

        I looked at Netty's ByteBuf a while back and found it didn't support instance reuse, thus was insufficient for his requirement. Did I miss something?

        If that is the most important consideration above all else, outweighing all positives, then you did not miss something.

        What specifically would you suggest in the alternative?

        Nick Dimiduk added a comment -

        What specifically would you suggest in the alternative?

        I don't have anything else at this time, except perhaps for ByteBuffers + some reflection voodoo. This has the benefit of sticking with the JVM "native" APIs, but has the downside of reflection voodoo. I wrote up a little benchmark a while back, comparing allocating new DirectByteBuffers vs. reusing a single instance and re-assigning it with reflection. Reflection was slower than a single allocation, but it didn't account for the collection afterwards. I also don't think the synthetic microbenchmark is indicative of use in the real system. In any case, even if DBB + reflection proves viable, we don't get the many other benefits you itemize above. DirectByteBuffers + reflection does open up the possibility of using Unsafe directly to manage memory, which may be desirable.
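
        For the record, the reflection "voodoo" in question looks roughly like this: re-pointing a single DirectByteBuffer at different native regions through java.nio.Buffer's private fields. A sketch of the general technique on Java 7 HotSpot, not the benchmark code:

        import java.lang.reflect.Field;
        import java.nio.Buffer;
        import java.nio.ByteBuffer;

        public final class DirectBufferReuse {
            private static final Field ADDRESS;
            private static final Field CAPACITY;
            static {
                try {
                    // Private fields on java.nio.Buffer.
                    ADDRESS = Buffer.class.getDeclaredField("address");
                    CAPACITY = Buffer.class.getDeclaredField("capacity");
                    ADDRESS.setAccessible(true);
                    CAPACITY.setAccessible(true);
                } catch (NoSuchFieldException e) {
                    throw new ExceptionInInitializerError(e);
                }
            }

            /** Re-point an existing direct buffer at a new native region
             *  instead of allocating a fresh DirectByteBuffer. */
            public static void repoint(ByteBuffer buf, long address, int capacity)
                    throws IllegalAccessException {
                ADDRESS.setLong(buf, address);
                CAPACITY.setInt(buf, capacity);
                buf.clear(); // position/limit now span the new region
            }
        }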

        As you say, more investigation is necessary.

        Andrew Purtell added a comment -

        perhaps except for ByteBuffers + some reflection voodoo

        Voodoo is kind of nonspecific as far as implementation strategy descriptions go.

        I guess we need to reuse such a wrapper object if we must take a reflection-based instantiation hit each time.

        DirectByteBuffers + reflection does open up the possibility of using the unsafe directly to manage memory, which may be desirable.

        Isn't this undesirable? Unsafe is a vendor-specific extension. Even so, Oracle recently ran a public survey asking which uses of Unsafe are common and what would happen if it went away. We do use Unsafe and have some exposure to this, but we do have a non-Unsafe fallback in those places.
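
        The guarded-Unsafe pattern alluded to here looks roughly like the following (an illustrative sketch, not the actual HBase utility):

        import java.lang.reflect.Field;
        import sun.misc.Unsafe;

        final class UnsafeAccess {
            static final Unsafe UNSAFE = tryGetUnsafe();

            private static Unsafe tryGetUnsafe() {
                try {
                    Field f = Unsafe.class.getDeclaredField("theUnsafe");
                    f.setAccessible(true);
                    return (Unsafe) f.get(null);
                } catch (Throwable t) {
                    return null; // no Unsafe: callers take the fallback path
                }
            }

            /** Big-endian int read, with a pure-Java fallback. */
            static int getIntBigEndian(byte[] b, int off) {
                if (UNSAFE != null) {
                    // Assumes a little-endian platform, hence the reverse.
                    return Integer.reverseBytes(UNSAFE.getInt(
                        b, (long) Unsafe.ARRAY_BYTE_BASE_OFFSET + off));
                }
                return ((b[off] & 0xff) << 24) | ((b[off + 1] & 0xff) << 16)
                     | ((b[off + 2] & 0xff) << 8) | (b[off + 3] & 0xff);
            }
        }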

        This has the benefit of sticking with the JVM "native" APIs,

        Don't the "native" ByteBuffer method calls tend not to be inlined? I have heard the complaint but have not personally examined JIT disassembly (yet). Aren't there boundary checks and index compensations sprinkled throughout? (which Netty does away with in the simple ByteBuf types)

        As you say, more investigation is necessary

        Great, let's proceed. At the moment this issue is about what happens if you even try to bring in 4. On that, N pointed me to TestHCM#testClusterStatus, which tests the multicast status publisher he implemented with Netty 3 channels. My port of that to Netty 4 APIs fails if I remove the @Ignore decoration, so I don't have it right yet.

        Andrew Purtell added a comment -

        I wonder if we won't have an issue, for example when we will want to pass the buffer from the hbase socket to hdfs.

        Yes, Nicolas Liochon, I worry about this also.

        Nick Dimiduk added a comment -

        Voodoo is kind of nonspecific as far as implementation strategy descriptions go.

        Agreed. I'm happy to share my experiment (in a more appropriate place than this Netty ticket perhaps), but here's a gist.

        Isn't this undesirable? Unsafe is a vendor specific extension.

        Yes, in the long term. My understanding is that OpenJDK is moving to make some of these facilities more accessible. The timeline appears to be Java 9 or later, as far as I can tell. I think it would be reasonable to consider use of Unsafe for any short- to medium-term implementation.

        Don't the "native" ByteBuffer method calls tend not to be inlined?

        Unknown.

        Great, let's proceed.

        Yes, please proceed! I'll discontinue my conjecture and instead have a look at your work.

        Matt Corgan added a comment -

        Andrew Purtell on HBASE-10191

        We could make the investment of writing our own slab allocator. Experiments with Netty 4 ByteBufs are in part about seeing if we can re-use open source in production already rather than redo the work. On the other hand, it could be a crucial component so maybe it's necessary to have complete control. Perhaps we can move additional comments on this sub-topic over to HBASE-10573?

        Writing from scratch does seem attractive for keeping it simple and targeted at HBase's use cases. It could probably be HBase-unaware and have a dedicated test suite. The basic concepts are pretty straightforward; most of the complexity would probably arise in concurrency-related operations like ref-counting to know when a slab is 100% safe to recycle. When a block is copied to a new slab, new readers can use the new location, and old readers can still use the block on the old slab, but you have to be sure to wait for all the old readers to finish before recycling the slab. You have to wait for straggling readers and be sure to decrement the ref-counts for errored readers, etc.
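
        A minimal sketch of the ref-counting Matt describes (hypothetical names; a real allocator would also need the wait-for-stragglers machinery):

        import java.util.concurrent.atomic.AtomicInteger;

        final class Slab {
            private final byte[] data;
            // Starts at 1 for the owner; readers retain/release around use.
            private final AtomicInteger refCount = new AtomicInteger(1);

            Slab(int size) { this.data = new byte[size]; }

            /** A reader pins the slab before use; fails if it was recycled. */
            boolean retain() {
                for (;;) {
                    int c = refCount.get();
                    if (c == 0) return false;  // already recycled
                    if (refCount.compareAndSet(c, c + 1)) return true;
                }
            }

            /** Every reader, including an errored one, must release in a
             *  finally block; the last release recycles the slab. */
            void release() {
                if (refCount.decrementAndGet() == 0) {
                    recycle();
                }
            }

            private void recycle() {
                // Safe: no readers remain. Return the slab to the pool here.
            }
        }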

        Andrew Purtell added a comment -

        We could make the investment of writing our own slab allocator.

        Writing from scratch does seem attractive for keeping it simple and targeted at hbase's use cases.

        See HBASE-10655 for follow-up.

        There are other reasons for using Netty 4 in addition to ByteBuf. Will proceed with this issue a bit further. We can use ByteRange so as to be allocator-agnostic.

        stack added a comment -

        I wonder if we won't have an issue, for example when we will want to pass the buffer from the hbase socket to hdfs.

        Do we mean passing a Netty ByteBuf from HBase to HDFS in the above? We will want to go both ways – out and in.

        The list of netty(5) benefits is long, as Andrew notes, so even if we skirt ByteBuf – no reuse probably simplifies the implementation, would be my guess – this ticket is worthwhile.

        Andrew Purtell added a comment -

        Just to close the circle on the current patch: the multicast ClusterStatusPublisher and ClusterStatusListener use Netty 3 APIs that go away in later versions. I made a stab at "porting" but didn't end up with functional code, according to the (disabled) TestHCM unit test for it. The updated version of the patch simply removes this code.

        stack added a comment -

        Nicolas Liochon, what do you think? OK to disable the multicast, or do you think it's required going forward? Thanks boss.

        Nicolas Liochon added a comment -

        In 0.98 and earlier, it's the only solution to limit the impact of hbase.rpc.timeout. In trunk we're a little better, since I decoupled the socket timeout from the RPC timeout, but it's still fresh, and multicasting the message will still be more efficient, by ~10 seconds in many cases. So yes, I really want to keep it.

        I can give the Netty 4 conversion a try, but not before the end of next week, or maybe in two weeks.

        Nicolas Liochon added a comment -

        Here is a version that works here. It seems that the handlers are surprised by the Datagram objects... I removed them and do the work directly; they don't add much.

        stack added a comment -

        lgtm +1 if tests pass.

        We still have to enable the clusterstatus broadcast, right?

        It seems that the handlers are surprised by the Datagram objects....

        What does the above mean? The listener is on the client side, not on the server side.

        Nicolas Liochon added a comment -

        We still have to enable the clusterstatus broadcast, right?

        Yes

        What does the above mean?

        Netty also uses this "handler" wording. A handler takes something off the pipeline and puts something back. But it does not work exactly as it used to in Netty 3.x, so I ended up with a single handler instead of two.
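
        The single-handler shape described might look as follows in Netty 4 (a hedged sketch with an invented class name, not the committed code):

        import io.netty.buffer.ByteBuf;
        import io.netty.channel.ChannelHandlerContext;
        import io.netty.channel.SimpleChannelInboundHandler;
        import io.netty.channel.socket.DatagramPacket;

        class ClusterStatusHandler extends SimpleChannelInboundHandler<DatagramPacket> {
            @Override
            protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) {
                // Unwrap the datagram and decode in one place, rather than
                // chaining separate unwrapping and decoding handlers.
                ByteBuf payload = packet.content();
                byte[] bytes = new byte[payload.readableBytes()];
                payload.readBytes(bytes);
                // decode the ClusterStatus protobuf from `bytes` here
            }
        }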

        stack added a comment -

        I see.

        +1 if tests pass for you.

        stack added a comment -

        Any luck, [~liochon]?

        stack added a comment -

        I ran unit tests over here and all seemed to pass:

        ...
        Running org.apache.hadoop.hbase.security.token.TestTokenAuthentication
        Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.343 sec
        Running org.apache.hadoop.hbase.security.visibility.TestVisibilityLabels
        Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.163 sec
        Running org.apache.hadoop.hbase.migration.TestUpgradeTo96
        Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.559 sec
        Running org.apache.hadoop.hbase.migration.TestNamespaceUpgrade
        Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.436 sec

        Results :

        Tests run: 1187, Failures: 0, Errors: 0, Skipped: 5

        Nicolas Liochon added a comment -

        I committed it. I was waiting for hadoop-qa feedback (you know, all this code analysis...) but it seems it will never come, so...

        Thanks for the review!

        stack added a comment -

        Did we break HBase on Java 6? See https://builds.apache.org/job/PreCommit-HBASE-Build/9550//testReport/org.apache.hadoop.hbase.client/TestHCM/org_apache_hadoop_hbase_client_TestHCM/ where it complains:

        Caused by: java.lang.UnsupportedOperationException: Only supported on java 7+.
        at io.netty.channel.socket.nio.NioDatagramChannel.checkJavaVersion(NioDatagramChannel.java:103)
        at io.netty.channel.socket.nio.NioDatagramChannel.joinGroup(NioDatagramChannel.java:381)
        at org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher.connect(ClusterStatusPublisher.java:271)
        ...

        I built with Java 7.
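
        One obvious guard (an editor's sketch, not the committed fix) would be to attempt the multicast join only on Java 7+:

        // Hypothetical helper: NioDatagramChannel.joinGroup requires Java 7
        // NIO, so skip multicast status publishing on older runtimes.
        private static boolean javaSupportsNioMulticast() {
            // "1.6", "1.7", ... on JVMs current at the time.
            String spec = System.getProperty("java.specification.version", "1.6");
            return Double.parseDouble(spec) >= 1.7;
        }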

        Nicolas Liochon added a comment -

        Likely... Ouch. Do we still support Java 6 on 1.0?
        Anyway, can we deactivate the test so the build can finish?

        I'm on my phone; I can't do it right now, but I will do it in a few hours...

        Hudson added a comment -

        SUCCESS: Integrated in HBase-TRUNK #5135 (See https://builds.apache.org/job/HBase-TRUNK/5135/)
        HBASE-10573 Use Netty 4 (nkeywal: rev 1596192)

        • /hbase/trunk/hbase-client/pom.xml
        • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientIdGenerator.java
        • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java
        • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Addressing.java
        • /hbase/trunk/hbase-prefix-tree/pom.xml
        • /hbase/trunk/hbase-server/pom.xml
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowAndColumnRangeFilter.java
        • /hbase/trunk/pom.xml
        Nicolas Liochon added a comment -

        10573.add.patch contains what I plan to commit once the migration to git is done.

        Nicolas Liochon added a comment -

        Committed. Hopefully we're done.

        Hudson added a comment -

        FAILURE: Integrated in HBase-TRUNK #5139 (See https://builds.apache.org/job/HBase-TRUNK/5139/)
        HBASE-10573 Use Netty 4 - addendum (nkeywal: rev 220037c465735f3f7c88fa1cdd966a872df714e5)

        • hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
        Enis Soztutar added a comment -

        Closing this issue after 0.99.0 release.


          People

          • Assignee: Nicolas Liochon
          • Reporter: Andrew Purtell
          • Votes: 0
          • Watchers: 12
