Details

      Description

      This was raised by the Phoenix team. During a profiling session we noticed that catching the joinedHeap up to the current row via seek causes a performance regression, which makes the joinedHeap efficient only when either a high or a low percentage of rows is matched by the filter.
      (High is fine because the joinedHeap does not get behind as often and rarely needs to be caught up; low is fine because the seek does not happen frequently.)

      In our tests we found that the solution is quite simple: replace seek with reseek. A patch is coming soon.
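      As a minimal sketch of the fix (modeled on the 0.94-era HRegion.RegionScannerImpl joined-heap catch-up path; the variable names here are illustrative, not the committed code):

         // joinedHeap is the KeyValueHeap over the non-essential column
         // families; currentRow is the row the essential heap is on.
         KeyValue firstOnCurrentRow = KeyValue.createFirstOnRow(currentRow);

         // Before: an unconditional seek, which repositions from scratch
         // (index lookup, possibly a new block read) on every catch-up.
         // boolean matched = this.joinedHeap.seek(firstOnCurrentRow);

         // After: reseek, a forward-only repositioning that can stay inside
         // the current block when the target row is nearby.
         boolean matched = this.joinedHeap.reseek(firstOnCurrentRow);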

      1. 8316-0.94.txt
        0.9 kB
        Lars Hofhansl
      2. 8316-trunk.txt
        1 kB
        Ted Yu
      3. 8316-trunk.txt
        1 kB
        Ted Yu
      4. FDencode.png
        90 kB
        Lars Hofhansl
      5. noencode.png
        90 kB
        Lars Hofhansl

        Activity

        Lars Hofhansl added a comment -

        I ran the TestJoinedScanners that Ted attached to the parent issue on 0.94. In every case reseek was equivalent to or faster than seek.
        The reseek case will be better when the joinedHeap gets behind but the current heap row is still in the same block. TestJoinedScanners uses quite large (128k) values, so it is very unlikely that the current heap row is in the same block as the joinedHeap row; hence there is hardly any improvement there. The key point is that there was no slowdown either.
        In the Phoenix case we have smaller column families, so it is more likely that the next row of the heap is still in the same block, and hence we see a dramatic improvement.

        The Phoenix team will probably have concrete numbers tomorrow.

        Lars Hofhansl added a comment -

        Here's the trivial patch.

        Ted Yu added a comment -

        How about this one?

        It aligns with RegionScannerImpl#reseek().

        Lars Hofhansl added a comment -

        Trunk patch.
        I can't see how this could break anything (and if it does, we have a bigger problem, since that would mean the joinedHeap got ahead of the regular heap, which would be a disaster), but let's have a HadoopQA run.

        Lars Hofhansl added a comment -

        But in this case we already have a KeyValueHeap, which has reseek. The initial seek could also have been expressed as a requestSeek. I prefer reseek, as it is easier to see what happens.
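        For reference, a sketch of the three positioning calls on the 0.94-era KeyValueScanner/KeyValueHeap interface (a summary of their documented semantics, not code from the patch):

           KeyValue kv = KeyValue.createFirstOnRow(currentRow);

           heap.seek(kv);                    // position from scratch at or after kv
           heap.reseek(kv);                  // forward-only; assumes the heap is at
                                             // or before kv, so it can often stay
                                             // within the current block
           heap.requestSeek(kv, true, true); // like reseek (forward = true), but may
                                             // skip the seek entirely via Bloom
                                             // filters and per-file max timestamps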

        Lars Hofhansl added a comment -

        Also, I think we do not want to use Bloom filters here, as we are positioning a scanner (reseek makes that clear, and it also does not use Bloom filters).

        Ted Yu added a comment -

        Here is the javadoc for requestSeek():

           * Similar to {@link #seek} (or {@link #reseek} if forward is true) but only
           * does a seek operation after checking that it is really necessary for the
           * row/column combination specified by the kv parameter. This function was
           * added to avoid unnecessary disk seeks by checking row-column Bloom filters
           * before a seek on multi-column get/scan queries, and to optimize by looking
           * up more recent files first.
        

        Looks like requestSeek() should perform better.
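        Applied to the joined heap, the call would look roughly like this (a sketch assuming forward repositioning is always wanted here, as in RegionScannerImpl#reseek()):

           // forward = true: behave like reseek; useBloom = true: allow
           // Bloom-filter and max-timestamp checks to skip the seek.
           boolean matched = this.joinedHeap.requestSeek(
               KeyValue.createFirstOnRow(currentRow), true, true);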

        Lars Hofhansl added a comment -

        Hmm... OK. Looked at the code. I buy that. Let's do that instead.

        Lars Hofhansl added a comment -

        Even the Bloom filter is fine; it seems it will help with file selection.

        Lars Hofhansl added a comment -

        Precommit picked up my change.

        Ted Yu added a comment -

        That's okay.

        Attaching my patch again.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12577958/8316-0.96.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5239//console

        This message is automatically generated.

        Lars Hofhansl added a comment -

        Looking good. Will await the perf numbers from Phoenix (James Taylor), and then commit if all looks right.
        (I don't think the tests have to be redone with requestSeek specifically, as it will only be better than reseek.)

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12577962/8316-trunk.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 lineLengths. The patch introduces lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5240//console

        This message is automatically generated.

        Anoop Sam John added a comment -

        Ya, requestSeek() in this case will help with the HFile max-timestamp based lazy seeks. There won't be any Bloom filter usage here.
        +1

        Lars Hofhansl added a comment -

        Seems like this should be assigned to me.

        Ted Yu added a comment -

        I am fine either way.

        Lars Hofhansl added a comment -

        Benchmark results (replacing seek with reseek, not requestSeek, but the latter will only make it better).
        The dotted lines are the results from before the change.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12578084/noencode.png
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5251//console

        This message is automatically generated.

        Ted Yu added a comment -

        Nice charts.

        Serial means a normal scan, while parallel means issuing scan requests against multiple regions, I guess.

        Lars Hofhansl added a comment -

        Yeah, sorry, I should have explained more. Serial is single threaded; parallel means using Phoenix's logic to parallelize a query across Regions.
        The main take-away from the graph is that with this change there is no scenario where enabling Essential CFs causes a slowdown.

        Going to commit (with Ted's enhancements) in a few, unless I hear objections.

        Lars Hofhansl added a comment -

        Committed to 0.94, 0.95, and 0.98.
        Thanks for looking and improving, Ted!

        Ted Yu added a comment -

        @Lars:
        Can you share the ratio between the sizes of the narrow and wide column families?

        Thanks

        Hudson added a comment -

        Integrated in HBase-0.94 #954 (See https://builds.apache.org/job/HBase-0.94/954/)
        HBASE-8316 JoinedHeap for non essential column families should reseek instead of seek (Revision 1466708)

        Result = SUCCESS
        larsh :
        Files :

        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
        Lars Hofhansl added a comment -

        In TestJoinedScanner I tried with 1k and 128k large CFs.

        For the Phoenix test:

        • The small CF has one column of 8 bytes.
        • The large CF has three columns (200 bytes, 200 bytes, 8 bytes; 408 bytes in total).
        Hudson added a comment -

        Integrated in hbase-0.95 #139 (See https://builds.apache.org/job/hbase-0.95/139/)
        HBASE-8316 JoinedHeap for non essential column families should reseek instead of seek (Revision 1466712)

        Result = SUCCESS
        larsh :
        Files :

        • /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
        Hudson added a comment -

        Integrated in HBase-TRUNK #4051 (See https://builds.apache.org/job/HBase-TRUNK/4051/)
        HBASE-8316 JoinedHeap for non essential column families should reseek instead of seek (Revision 1466711)

        Result = SUCCESS
        larsh :
        Files :

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
        Hudson added a comment -

        Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #492 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/492/)
        HBASE-8316 JoinedHeap for non essential column families should reseek instead of seek (Revision 1466711)

        Result = FAILURE
        larsh :
        Files :

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
        Hudson added a comment -

        Integrated in hbase-0.95-on-hadoop2 #65 (See https://builds.apache.org/job/hbase-0.95-on-hadoop2/65/)
        HBASE-8316 JoinedHeap for non essential column families should reseek instead of seek (Revision 1466712)

        Result = FAILURE
        larsh :
        Files :

        • /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
        Hudson added a comment -

        Integrated in HBase-0.94-security #134 (See https://builds.apache.org/job/HBase-0.94-security/134/)
        HBASE-8316 JoinedHeap for non essential column families should reseek instead of seek (Revision 1466708)

        Result = FAILURE
        larsh :
        Files :

        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java

          People

          • Assignee: Lars Hofhansl
          • Reporter: Lars Hofhansl
          • Votes: 0
          • Watchers: 8
