Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.99.0, hbase-10070
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Running the integration tests in HBASE-10572 and HBASE-10355, it seems that we need some changes on the client side for cache invalidation of meta entries in backup RPCs.

      Mainly, the RPCs made to replicas should not invalidate the cache entries for all the replicas (for example, on RegionMovedException, connection errors, etc.).

        Attachments

      1. 0029-HBASE-10701-Cache-invalidation-improvements-from-cli.patch
        60 kB
        Enis Soztutar
      2. hbase-10701_v5.patch
        83 kB
        Enis Soztutar
      3. hbase-10701_v4.patch
        64 kB
        Enis Soztutar
      4. hbase-10701_v1.patch
        46 kB
        Enis Soztutar
      5. hbase-10701_v3.patch
        64 kB
        Enis Soztutar
      6. hbase-10701_v2.patch
        64 kB
        Enis Soztutar
      7. hbase-10701_v1.patch
        46 kB
        Enis Soztutar

        Issue Links

          Activity

          Enis Soztutar added a comment -

          Attaching a v1 patch, which contains the fix. But give me some more time, because I am still debugging some other related issues.

          Enis Soztutar added a comment -

          Attaching a second patch, which fixes three interrelated issues. Fortunately, with this patch, the test from HBASE-10572 is able to run on an 8-node cluster for 100 minutes with CM (ChaosMonkey).

          The changes include:

          1. Individual RPCs for replicas can receive region-level exceptions (RegionMovedException, etc.) and also connection exceptions. Now the cache invalidation is done so that only the cache entry for that replica's location is cleared, instead of the whole cached meta row.
          2. When a server is killed, its locations are removed from the cache. But after some time only the primary region info will be left in the cache, and unless we go and look at meta again, we won't know about the region replicas, so no secondary RPCs will be issued unless the primary RPC times out. I fixed it so that individual locations in RegionLocations are not set to null; instead, the individual HRL.serverNames are set to null. This lets the RPC layer know about the replicas, while the locations might still be null, which will trigger a meta lookup. There are still some failures in the AP code path that I am investigating.
          3. RpcRetryingCallerWithReadReplicas used to schedule the RPCs to the primary and the secondaries and wait for the first result, regardless of whether it was an exception or a success. In case of a closed connection, one of the RPCs would immediately return with a DoNotRetryEx and fail the whole get() operation, although we should be able to read from the other replicas perfectly fine. I changed the code path so that it waits for the first successful operation, a cancellation or interrupt, or for all operations to fail with DoNotRetryEx or RetriesExhaustedEx (see the sketch after this list).
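
          For illustration, here is a minimal sketch of the "wait for the first successful replica" pattern described in item 3, written against a plain ExecutorCompletionService; the class name, the Callable-based representation of the calls, and the error handling are simplifying assumptions, not the actual patch code.

            import java.util.ArrayList;
            import java.util.List;
            import java.util.concurrent.Callable;
            import java.util.concurrent.CompletionService;
            import java.util.concurrent.ExecutionException;
            import java.util.concurrent.ExecutorCompletionService;
            import java.util.concurrent.ExecutorService;

            // Hypothetical sketch: submit the primary and replica calls, return the
            // first successful result, and fail only once every call has failed.
            // The real caller also handles cancellation of the still-running calls.
            public class FirstSuccessSketch {
              static <R> R callWithReplicas(List<Callable<R>> calls, ExecutorService pool)
                  throws ExecutionException, InterruptedException {
                if (calls.isEmpty()) {
                  throw new IllegalArgumentException("no calls to schedule");
                }
                CompletionService<R> cs = new ExecutorCompletionService<>(pool);
                for (Callable<R> c : calls) {
                  cs.submit(c); // primary first, then the secondaries
                }
                List<ExecutionException> failures = new ArrayList<>(calls.size());
                for (int i = 0; i < calls.size(); i++) {
                  try {
                    return cs.take().get(); // the first successful call wins
                  } catch (ExecutionException e) {
                    failures.add(e); // remember the failure, keep waiting for the rest
                  }
                }
                throw failures.get(0); // every call failed; surface the first failure
              }
            }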

          Nicolas Liochon could you please take a close look?

          Nicolas Liochon added a comment -

          yes. Will do that for tomorrow.

          Nicolas Liochon added a comment -

          Bytes.toString(regionName));

          Should be a "Bytes.toStringBinary"

          RegionLocations

          Seems ok to me.

          MetaReader

              } catch (Exception parseEx) {
                // Ignore. This is used with tableName passed as regionName.
              }
          

          This is scary. We're catching wildly here, and it points to a design issue (maybe an old one?).

          AsyncProcess / ConnectionManager

          Are we sure that locateRegionInMeta will not return a location with a null serverName?
          AsyncProcess doesn't retry if the location is null because it expects that there is already a retry there.

          RpcRetryingCallerWithReadReplicas

          long start = System.nanoTime();
          This is not used

          if (exceptions == null) exceptions = new ArrayList<ExecutionException>(rl.size());
          the condition is always true here

          Result result = null;
          The scope could be narrowed.

          // the primary call failed with RetriesExhaustedException or DoNotRetryIOException

          In case of a close connection

          This should not be the case. This is something we should retry (and we do, IIRC). But maybe you were speaking about another kind of close?
          Note that I'm not against waiting until the bitter end. But if there is a coding/logic error by the client/customer (usually surfaced as DoNotRetryIOException), we will go to all the servers instead of only one; that's why I preferred the first approach in the original implementation. DoNotRetryIOException usually means "logic error"...

          Enis Soztutar added a comment -

          Thanks Nicolas for taking a look.

          This is scary. We're wild on catching here, and this denotes a design issue (may be an old one?)

          We are catching only the exception raised by the parse. The old design issue is that we have the tableNameOrRegionName nonsense passed in as the argument here from HBaseAdmin; we end up sending the table name to be parsed as a regionName. I did not want to change the API there, since it is unrelated.

          Are we sure that locateRegionInMeta will not return a location with a null serverName?

          MetaCache CAN return HRLs with null serverNames with this patch. This won't happen for results coming from meta, so a locateRegionInMeta() call MIGHT return HRLs with null server names only when the result comes from the cache. When we invalidate the cache entries for a server (on a connection error, for example), the cache then forgets about that replica, which means we won't schedule backup RPCs to that replica at all. We can keep the cache's HRL-nulling behavior as it is but keep an int for the max replicaId in the cache, if you think that would be a better design.

          the condition is always true here

          It is inside a while loop.

          DoNotRetryIOException means "logic error" usually...

          I see. If in every case of a DoNotRetryIOException there is a logic error, I'll also rethrow it from the primary without scheduling the replicas. But you agree that RetriesExhausted should be tried on every server no matter what, right?

          Nicolas Liochon added a comment -

          So locateRegionInMeta() call MIGHT return HRL

          Hmm, then we may have an issue here, I think:

            private RegionLocations findDestLocation(
                TableName tableName, Row row, boolean checkPrimary) throws IOException {
              if (row == null) throw new IllegalArgumentException("#" + id + ", row cannot be null");
              RegionLocations loc = hConnection.locateRegionAll(tableName, row.getRow());
              if (loc == null
                  || (checkPrimary && (loc.isEmpty()
                  || loc.getDefaultRegionLocation() == null
                  || loc.getDefaultRegionLocation().getServerName() == null))) {
                throw new IOException("#" + id + ", no location found, aborting submit for" +
                    " tableName=" + tableName + " rowkey=" + Arrays.toString(row.getRow()));
              }
              return loc;
            }
          

          The cache might return something with a null server name without retrying against meta. The caller will get the exception and will think "after a lot of retries we can't get the location, so we're in a bad state, so we stop".
          I'm not totally sure I'm right, because we're not looking for the secondary replicas here.

          It is inside a while loop.

          Not the first one, for the main replica.

          But you agree with the RetriesExhausted to be tried on every server no matter what, right ?

          It's so extreme that I don't really know. I suppose that whatever you do, it's going to be difficult at the end. I'm +1 on whatever the final choice is here.

          Enis Soztutar added a comment -

          Thanks Nicolas for the careful review.

          I've changed the patch so that I dropped the approach of using HRLs with null ServerNames. Instead, we still set the HRL item to null inside RegionLocations, and RegionLocations can now contain null elements at the tail of the array as well. This enables the cache to know how many replicas there are, while the individual locations might still be unknown (see the sketch below).
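
          As a rough illustration of that layout (the type below is invented for illustration, not the real RegionLocations class), the array keeps one slot per replica id so the replica count survives cache invalidation, while an unknown location is simply a null slot:

            // Simplified sketch: index == replicaId; a null slot means "this replica
            // exists but its location is currently unknown", which triggers a meta
            // lookup, while the array length still tells the RPC layer how many
            // replicas there are. The real class stores HRegionLocation objects.
            public class RegionLocationsSketch {
              private final String[] locations; // one slot per replica

              public RegionLocationsSketch(int numReplicas) {
                this.locations = new String[numReplicas];
              }

              public void setLocation(int replicaId, String serverName) {
                locations[replicaId] = serverName;
              }

              // Invalidate only this replica's entry instead of the whole meta row.
              public void clearLocation(int replicaId) {
                locations[replicaId] = null;
              }

              public int numReplicas() {
                return locations.length; // known even when some locations are null
              }

              public boolean needsMetaLookup(int replicaId) {
                return locations[replicaId] == null;
              }
            }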

          I've been testing this with

          hbase org.apache.hadoop.hbase.test.IntegrationTestTimeBoundedRequestsWithRegionReplicas -Dhbase.IntegrationTestTimeBoundedRequestsWithRegionReplicas.runtime=600000 -DIntegrationTestTimeBoundedRequestsWithRegionReplicas.num_write_threads=30 -DIntegrationTestTimeBoundedRequestsWithRegionReplicas.region_replication=3 -DIntegrationTestTimeBoundedRequestsWithRegionReplicas.num_read_threads=30 -Dhbase.ipc.client.allowsInterrupt=true
          

          it seems the issues are fixed. However, I notice that the test dies most of the time with an OOM ("cannot create native thread") because the number of threads grows unbounded (north of 4K).
          Tried setting -Dhbase.hconnection.threads.max=512, with no results so far.

          One other issue (probably related) was that the RPCs would not start for a long time and the gets would time out (10-20 secs), because the thread pool executor does not schedule the tasks in the CompletionService from RpcRetryingCallerWithReadReplicas. Do you have any opinion on this? Should we create a secondary pool for the backup requests? If we address the thread-growth problem, this will probably be fixed as well (a generic illustration of a capped pool follows).
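
          For context, a generic sketch (not HBase's actual pool construction) of a bounded executor: with a bounded hand-off queue the thread count is capped at maxThreads, and excess work is pushed back to the submitting caller instead of spawning new threads. The queue size and rejection policy here are illustrative assumptions.

            import java.util.concurrent.LinkedBlockingQueue;
            import java.util.concurrent.ThreadPoolExecutor;
            import java.util.concurrent.TimeUnit;

            public class BoundedPoolSketch {
              public static ThreadPoolExecutor newBoundedPool(int maxThreads) {
                ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    maxThreads, maxThreads,                      // fixed upper bound
                    60L, TimeUnit.SECONDS,                       // idle keep-alive
                    new LinkedBlockingQueue<Runnable>(1024),     // bounded queue
                    new ThreadPoolExecutor.CallerRunsPolicy());  // back-pressure
                pool.allowCoreThreadTimeOut(true); // let idle threads wind down
                return pool;
              }
            }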

          The v3 patch also addresses your comments, except for the DoNotRetryEx one. We'll have to get this running consistently before addressing that, I think.

          Devaraj Das added a comment -

          I have been playing with this patch as part of working on HBASE-10634. I reviewed it as well (though not that deeply). +1 for committing to the branch and doing the other fixes (to do with thread scheduling) as a followup.

          Enis Soztutar added a comment -

          Thanks Devaraj for the review.

          It so extreme that I don't really know. I suppose that whatever you do it's going to be difficult at the end . I'm +1 whatever the final choice here.

          I think it is safer to send even the DoNotRetryIOException to replicas. If it becomes a problem to wait for all results from replicas, we can fix it later.

          However, I notice that the test most of the time dies with OOM, cannot create native thread, because the number of threads grow unbounded

          One cause for the number of threads jumping was that meta's own location is not cached, resulting in a zk request for every region-location cache miss. In the test we are doing 12K req/s from a single client, and with CM we issue a LOT of zk requests, causing multi-second slowdowns because of zk contention. HBASE-10785 attacks this issue.
          Nicolas Liochon, I'll commit v3 if you are ok with it.

          Enis Soztutar added a comment -

          While testing this more, we've encountered some problems on the write side when region locations are changing (with region replicas) via the balancer. This was because we were not guaranteeing that HCI.locateRegionInMeta() would always return a result containing the HRL for the replicaId sent with the call. A simple fix, checking whether the cached RegionLocations object contains that replicaId, ensures that we do not return cached results and instead go to meta if the location for the asked replicaId is null in the cache. Patch v4 fixes this (a sketch of the check follows the log below).

          2014-03-19 12:57:49,532|beaver.machine|INFO|2014-03-19 12:57:49,529 ERROR HBaseWriterThread_24 client.AsyncProcess: Failed to get region location
          2014-03-19 12:57:49,533|beaver.machine|INFO|java.io.IOException: #58, no location found, aborting submit for tableName=IntegrationTestTimeBoundedRequestsWithRegionReplicas rowkey=[48, 98, 100, 52, 102, 48, 100, 51, 54, 50, 102, 101, 102, 48, 49, 48, 49, 53, 48, 54, 102, 98, 98, 99, 99, 50, 97, 54, 100, 55, 50, 50, 45, 51, 56, 54, 51, 54, 57]
          2014-03-19 12:57:49,533|beaver.machine|INFO|at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:419)
          2014-03-19 12:57:49,534|beaver.machine|INFO|at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:341)
          2014-03-19 12:57:49,534|beaver.machine|INFO|at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:294)
          2014-03-19 12:57:49,534|beaver.machine|INFO|at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:1020)
          2014-03-19 12:57:49,535|beaver.machine|INFO|at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1294)
          2014-03-19 12:57:49,535|beaver.machine|INFO|at org.apache.hadoop.hbase.client.HTable.put(HTable.java:955)
          2014-03-19 12:57:49,535|beaver.machine|INFO|at org.apache.hadoop.hbase.util.MultiThreadedWriter$HBaseWriterThread.insert(MultiThreadedWriter.java:143)
          2014-03-19 12:57:49,536|beaver.machine|INFO|at org.apache.hadoop.hbase.util.MultiThreadedWriter$HBaseWriterThread.run(MultiThreadedWriter.java:108)
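
          A minimal sketch of the kind of check v4 adds (the class, field, and method names below are invented for illustration; the real change lives in the connection/meta-cache code): trust the cache only if it has a non-null location for the requested replica, and otherwise fall through to a fresh meta lookup.

            import java.util.Map;
            import java.util.concurrent.ConcurrentHashMap;

            // Hypothetical sketch: return the cached locations only if they cover the
            // requested replicaId; otherwise refresh from meta so the caller never
            // sees a missing replica slot.
            public class LocateSketch {
              private final Map<String, String[]> cache = new ConcurrentHashMap<>();

              public String locate(String regionKey, int replicaId) {
                String[] cached = cache.get(regionKey);
                if (cached != null && replicaId < cached.length && cached[replicaId] != null) {
                  return cached[replicaId]; // cache hit for this specific replica
                }
                String[] fresh = lookupInMeta(regionKey); // go back to hbase:meta
                cache.put(regionKey, fresh);
                return replicaId < fresh.length ? fresh[replicaId] : null;
              }

              private String[] lookupInMeta(String regionKey) {
                // stand-in for a real meta scan
                return new String[] { "server1", "server2", "server3" };
              }
            }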
          
          Nicolas Liochon added a comment -

          You have not uploaded the v4?
          But I'm ok with the principles mentioned on the v3.

          Enis Soztutar added a comment -

          Good catch, v4 attached.

          Nicolas Liochon added a comment -

          I reviewed v4.
          I'm +1; just a nit that can be fixed on commit:

                    Future<Result> f = cs.take();
                    if (f != null) {
                      return f.get(); // great we got an answer
                    }
          

          In the secondaries part, I don't think that cs.take() can return null.

          Enis Soztutar added a comment -

          Thanks Nicolas! I've fixed the nit in v5. Gonna commit this one.

          Enis Soztutar added a comment -

          Committed to branch hbase-10070.

          Enis Soztutar added a comment -

          Attaching the rebased patch for master that was committed.

          Enis Soztutar added a comment -

          Committed to master as part of the hbase-10070 branch merge.

          Hudson added a comment -

          FAILURE: Integrated in HBase-TRUNK #5245 (See https://builds.apache.org/job/HBase-TRUNK/5245/)
          HBASE-10701 Cache invalidation improvements from client side (enis: rev ad05de172f4df735c56f83b0d590724603b3c2e9)

          • hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java
          • hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedReader.java
          • hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
          • hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterConnection.java
          • hbase-common/src/main/java/org/apache/hadoop/hbase/util/BoundedCompletionService.java
          • hbase-common/src/main/java/org/apache/hadoop/hbase/util/Threads.java
          • hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLocations.java
          • hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriter.java
          • hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionAdapter.java
          • hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaCache.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/client/CoprocessorHConnection.java
          • hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriterBase.java
          • hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
          • hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
          • hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
          • hbase-client/src/test/java/org/apache/hadoop/hbase/TestRegionLocations.java
          Enis Soztutar added a comment -

          Closing this issue after 0.99.0 release.


            People

            • Assignee:
              Enis Soztutar
              Reporter:
              Enis Soztutar
            • Votes:
              0
            • Watchers:
              5
