HBase › HBASE-7320 Remove KeyValue.getBuffer() › HBASE-10531

Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

    Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.99.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Currently the byte[] key passed to HFileScanner.seekTo and HFileScanner.reseekTo is a combination of row, cf, qualifier, type and ts, and the caller forms it using kv.getBuffer(), which is deprecated. We need to work out how this can be achieved once kv.getBuffer() is removed.
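      For context, the flat key described above is the key portion of a serialized KeyValue. The following is an illustrative sketch (plain Java, not HBase code) of how such a flat key byte[] is laid out; the class and method names are hypothetical:

```java
import java.nio.ByteBuffer;

// Sketch of the flat "key" byte[] passed to seekTo/reseekTo:
//   row length (2 bytes) | row | family length (1 byte) | family
//   | qualifier | timestamp (8 bytes) | type (1 byte)
public class FlatKeySketch {
    static byte[] makeFlatKey(byte[] row, byte[] family, byte[] qualifier,
                              long timestamp, byte type) {
        int len = 2 + row.length + 1 + family.length + qualifier.length + 8 + 1;
        ByteBuffer b = ByteBuffer.allocate(len);
        b.putShort((short) row.length); // row length
        b.put(row);
        b.put((byte) family.length);    // family length
        b.put(family);
        b.put(qualifier);               // qualifier has no length prefix
        b.putLong(timestamp);
        b.put(type);
        return b.array();
    }

    public static void main(String[] args) {
        byte[] key = makeFlatKey("r1".getBytes(), "cf".getBytes(),
                                 "q".getBytes(), 42L, (byte) 4);
        System.out.println(key.length); // 2 + 2 + 1 + 2 + 1 + 8 + 1 = 17
    }
}
```

      Since the qualifier carries no length prefix, its length can only be derived from the total key length, which is why a Cell-based API needs the individual component offsets and lengths instead.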

      1. HBASE-10531.patch
        6 kB
        ramkrishna.s.vasudevan
      2. HBASE-10531_1.patch
        107 kB
        ramkrishna.s.vasudevan
      3. HBASE-10531_2.patch
        113 kB
        ramkrishna.s.vasudevan
      4. HBASE-10531_3.patch
        115 kB
        ramkrishna.s.vasudevan
      5. HBASE-10531_4.patch
        119 kB
        ramkrishna.s.vasudevan
      6. HBASE-10531_5.patch
        118 kB
        ramkrishna.s.vasudevan
      7. HBASE-10531_6.patch
        126 kB
        ramkrishna.s.vasudevan
      8. HBASE-10531_7.patch
        130 kB
        ramkrishna.s.vasudevan
      9. HBASE-10531_8.patch
        131 kB
        ramkrishna.s.vasudevan
      10. HBASE-10531_9.patch
        132 kB
        ramkrishna.s.vasudevan
      11. HBASE-10531_12.patch
        146 kB
        ramkrishna.s.vasudevan
      12. HBASE-10531_13.patch
        147 kB
        ramkrishna.s.vasudevan
      13. HBASE-10531_13.patch
        147 kB
        ramkrishna.s.vasudevan

        Activity

        ramkrishna.s.vasudevan created issue -
        ramkrishna.s.vasudevan made changes -
        Field Original Value New Value
        Fix Version/s 0.99.0 [ 12325675 ]
        ramkrishna.s.vasudevan added a comment -

        Can we just replace seekTo, reseekTo and seekBefore with APIs accepting just a Cell, and deprecate the others? This would ensure that from the underlying byte buffers (whatever format they may be in) we can create a Cell, possibly based on the codec, then extract the individual components from that Cell and do the comparison.

          @Deprecated
          int seekTo(byte[] key) throws IOException;
          @Deprecated
          int seekTo(byte[] key, int offset, int length) throws IOException;

          int seekTo(Cell kv) throws IOException;

          @Deprecated
          int reseekTo(byte[] key) throws IOException;
          @Deprecated
          int reseekTo(byte[] key, int offset, int length) throws IOException;

          int reseekTo(Cell kv) throws IOException;

          @Deprecated
          boolean seekBefore(byte[] key) throws IOException;
          @Deprecated
          boolean seekBefore(byte[] key, int offset, int length) throws IOException;

          boolean seekBefore(Cell kv) throws IOException;

        Matt Corgan added a comment -

        It seems right to me, but do you know why those methods took byte[] vs KeyValue to begin with?

        ramkrishna.s.vasudevan added a comment -

        Maybe because the value was not really needed in this comparison that happens on the block, so only the key part was extracted and used in these APIs.

        stack added a comment -

        lgtm

        Internally, we take the byte array, presume it is a serialized KeyValue key, and then reconstitute the row for a compare in a method named compareFlatKey.

        ramkrishna.s.vasudevan added a comment -

        Patch that deprecates and adds new seekTo, reseekTo and seekBefore APIs.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531.patch [ 12629742 ]
        Nick Dimiduk added a comment -

        Patch looks good to me. I'd rename Cell kv to Cell key so it's clear to the reader that only the key part is considered.

        Anoop Sam John added a comment -

        HFileScanner is @InterfaceAudience.Private. Do we still have to deprecate and then add the new API as an overload?
        It would be better to point to the alternate API to use along with @Deprecated.

        nit: There are whitespace errors in the patch.

        +      public int seekTo(Cell kv) throws IOException {
        +        KeyValue keyValue = KeyValueUtil.ensureKeyValue(kv);
        +        return seekTo(keyValue.getBuffer(), keyValue.getOffset(), keyValue.getLength());
        +      }
        

        Will you avoid the reference to keyValue.getBuffer() in coming patches?

        In the code we still refer to the deprecated API. Better to use the new API now.

        Andrew Purtell added a comment -

        It would be great if we can do an exorcism of private interfaces rather than deprecate them. We did this for KeyValue comparators between 0.96 and 0.98. I get that folks like Phoenix want some measure of internal interface stability, but until 1.0 if the cost is adding technical debt in the form of littering the code with half-broken or fully broken deprecated APIs, it is not worth it.

        stack added a comment -

        Agree w/ Andrew Purtell's comment above.

        What is going on below here? We add an API but then call the old? Shouldn't we be going the other way? The old calling the new?

        +      public int seekTo(Cell kv) throws IOException {
        +        KeyValue keyValue = KeyValueUtil.ensureKeyValue(kv);
        +        return seekTo(keyValue.getBuffer(), keyValue.getOffset(), keyValue.getLength());
        +      }
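
        The delegation direction suggested here can be sketched in a minimal, self-contained form (plain Java, no real HBase classes; all names are illustrative, and the real APIs throw IOException, which is omitted here to keep the sketch compact): the deprecated byte[] overload converts its argument and calls the new Cell-based method, so the seek logic lives in one place.

```java
import java.util.Arrays;

// Hypothetical one-method Cell stand-in for this sketch.
interface Cell {
    byte[] keyBytes();
}

abstract class SketchScanner {
    // New API: all seek logic is implemented against Cell.
    abstract int seekTo(Cell key);

    // Old API: kept only for compatibility; delegates to the new method.
    @Deprecated
    int seekTo(byte[] key, int offset, int length) {
        byte[] copy = Arrays.copyOfRange(key, offset, offset + length);
        return seekTo(() -> copy);
    }
}
```

        With this shape, removing the deprecated overloads later deletes only trivial delegation code, not real logic.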
        
        ramkrishna.s.vasudevan added a comment -

        Shouldn't we be going the other way? The old calling the new?

        Should be. I will change that and back it with test cases verifying it works fine.

        ramkrishna.s.vasudevan added a comment -

        The reason I did not implement the new API this way is that we may have to change the comparators and related code to work in this mode. So I thought that instead of giving an empty impl for the new API, I would keep the old one for now.
        Okay, so let me make the required changes to get this in as part of this JIRA only.

        ramkrishna.s.vasudevan added a comment -

        One of the major challenges in doing this is using the SamePrefixComparator, where it tries to compare using the common prefix. In our case the common prefix is not common to the individual components. For a plain seek/read it works fine. But everywhere else we need to call KeyValue.createKeyValueFromKey before using the Cell way of comparison.
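
        For readers unfamiliar with it, the common-prefix trick can be sketched as follows (assumed semantics, not the HBase implementation; names are illustrative). The point raised above is that on a flat key this prefix can span component boundaries (row/family/qualifier), which is what makes a per-component Cell comparison awkward:

```java
// When successive flat keys are known to share their first `common`
// bytes, comparison can skip them and start right after the prefix.
public class PrefixCompareSketch {
    static int compareIgnoringPrefix(int common, byte[] left, byte[] right) {
        int minLen = Math.min(left.length, right.length);
        for (int i = common; i < minLen; i++) {
            int diff = (left[i] & 0xff) - (right[i] & 0xff); // unsigned byte compare
            if (diff != 0) return diff;
        }
        return left.length - right.length; // shared prefix: shorter key sorts first
    }
}
```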

        Andrew Purtell added a comment -

        One of the major challenge in doing this is to use the SamePrefixComparator where it tries to compare using the common prefix. In our case the common prefix is not common for the individual components.

        Can we simply document in the SamePrefixComparator javadoc that it might be expensive because of internal allocations and copying?

        ramkrishna.s.vasudevan added a comment -

        I am trying to see if we can make things work using SamePrefixComparator.

        But everywhere we need to create Keyvalue.createKeyValueFromKey before using the cell way of comparison

        This could be a costly operation, but if we need to avoid it then the byte[] way of storing the index keys should also be changed to Cells. That should be done in a separate JIRA, I suppose.

        ramkrishna.s.vasudevan added a comment -

        Will come up with a WIP patch tomorrow. There are some test case failures while using reseek with MetaComparator.

        ramkrishna.s.vasudevan added a comment -

        My test case runs are not completing due to environment issues. Attaching a patch to show how this is done.
        In fact, as I mentioned earlier, doing KeyValue.createKeyFromKeyValue() involves unnecessary byte copies. We should think of avoiding them.
        Also, the comparison should be done between two Cells. The right-hand side of the comparison is generally the keys from the ByteBuffer or the index keys. So a constructor that creates a Cell from the ByteBuffer without doing a copy would be of great help here. I have not removed the old code path yet. Will upload to RB also.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_1.patch [ 12631486 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12631486/HBASE-10531_1.patch
        against trunk revision .
        ATTACHMENT ID: 12631486

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 22 new or modified tests.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        -1 javadoc. The javadoc tool appears to have generated 4 warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 findbugs. The patch appears to introduce 10 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 lineLengths. The patch introduces the following lines longer than 100:
        + public static int findCommonPrefixInQualifierPart(Cell left, Cell right, int qualifierCommonPrefix) {
        + public static int compareRowsWithCommonFamilyPrefix(Cell left, Cell right, int familyCommonPrefix) {
        + public static int compareRowsWithQualifierFamilyPrefix(Cell left, Cell right, int qualCommonPrefix) {
        + if (leftCell.getFamilyLength() + leftCell.getQualifierLength() == 0 && leftCell.getTypeByte() == Type.Minimum.getCode()) {
        + if (rightCell.getFamilyLength() + rightCell.getQualifierLength() == 0 && rightCell.getTypeByte() == Type.Minimum.getCode()) {
        + return Bytes.compareTo(leftCell.getFamilyArray(), leftCell.getFamilyOffset(), leftCell.getFamilyLength(),
        + int diff = compareFamilies(leftCell.getFamilyArray(), leftCell.getFamilyOffset(), leftCell.getFamilyLength(),
        + if (key.getFamilyLength() + key.getQualifierLength() == 0 && key.getTypeByte() == Type.Minimum.getCode()) {
        + if (right.getFamilyLength() + right.getQualifierLength() == 0 && right.getTypeByte() == Type.Minimum.getCode()) {
        + return comparator.compare(key, KeyValue.createKeyValueFromKey(bb.array(), bb.arrayOffset(), bb.limit()));

        +1 site. The mvn site goal succeeds with this patch.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8833//console

        This message is automatically generated.

        ramkrishna.s.vasudevan added a comment -

        Ping !!!

        ramkrishna.s.vasudevan added a comment -

        If the way this patch proceeds is fine, we can raise further tasks to do better cell comparisons instead of doing byte copies for all the cells on the right-hand side, like the block keys, fake keys etc.

        Lars Hofhansl added a comment -
        Cell right = KeyValue.createKeyValueFromKey(current.keyBuffer, 0, current.keyLength);
        

        Eventually we should be able to have a new implementation of Cell where we just reference the row/family/column/ts without copying anything (actually that is part of the goal).
        Until we do that, though, it seems better to keep using keyBuffer directly, otherwise we'll add yet another copy to an already expensive part of the scanning.

        ramkrishna.s.vasudevan added a comment - edited

        Eventually we should be able to have new implementation of Cell where we just the row/family/column/ts without copying anything (actually that is part of the goal).

        Yes Lars. That is mentioned in my comments. Currently we need to make a Cell and change the code to deal with Cells. That is why I made a Cell from the available key (which involves a copy).
        To do this we need changes in some of the places in the code where the keys are getting created.

        (Edit: comment rephrased)

        ramkrishna.s.vasudevan added a comment - edited

        otherwise we'll add yet another copy to an already expensive part of the scanning.

        I have a way to work around this. Since we are creating a cell here for comparison, I will create a new KV here that does not do any copy.

         public static class DerivedKeyValue extends KeyValue {

            private int length = 0;
            private int offset = 0;
            private byte[] b;

            public DerivedKeyValue(byte[] b, int offset, int length) {
              super(b, offset, length);
              this.b = b;
              setKeyOffset(offset);
              setKeyLength(length);
              this.length = length;
              this.offset = offset;
            }

            public void setKeyLength(int kLength) {
              this.length = kLength;
            }

            public void setKeyOffset(int kOffset) {
              this.offset = kOffset;
            }

            @Override
            public int getKeyOffset() {
              return this.offset;
            }

            @Override
            public byte[] getRowArray() {
              return b;
            }

            @Override
            public int getRowOffset() {
              return getKeyOffset() + Bytes.SIZEOF_SHORT;
            }

            @Override
            public byte[] getFamilyArray() {
              return b;
            }

            @Override
            public byte getFamilyLength() {
              return this.b[getFamilyOffset() - 1];
            }

            @Override
            public int getFamilyOffset() {
              return this.offset + Bytes.SIZEOF_SHORT + getRowLength() + Bytes.SIZEOF_BYTE;
            }

            @Override
            public byte[] getQualifierArray() {
              return b;
            }

            @Override
            public int getQualifierLength() {
              return getQualifierLength(getRowLength(), getFamilyLength());
            }

            @Override
            public int getQualifierOffset() {
              return super.getQualifierOffset();
            }

            @Override
            public int getKeyLength() {
              return length;
            }

            @Override
            public short getRowLength() {
              return Bytes.toShort(this.b, getKeyOffset());
            }

            private int getQualifierLength(int rlength, int flength) {
              return getKeyLength() - (int) getKeyDataStructureSize(rlength, flength, 0);
            }
        }
        

        Now here, if you see, the only difference between a normal KV and the one created by KeyValue.createKeyValueFromKey is that we don't actually need the first 8 bytes (ROW_OFFSET). So by avoiding those bytes, if we implement our own getKeyLength, getRowOffset, etc., we will be able to do a proper comparison. Then we can compare the individual rows, families and qualifiers. What do you think? We avoid the byte copy but create a new object, which I think is going to be cheaper.
        So we can create a cell like

        Cell r = new KeyValue.DerivedKeyValue(arr[mid], 0, arr[mid].length);
        
        Lars Hofhansl added a comment -

        Nice.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        I have uploaded a patch that uses an inner class KeyAloneKeyValue that does not copy the bytes.
        I have not yet removed the duplicate code. Let me know what you think on this.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10532_2.patch [ 12632810 ]
        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10532_2.patch [ 12632810 ]
        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_2.patch [ 12632812 ]
        ramkrishna.s.vasudevan added a comment -

        https://reviews.apache.org/r/18570/
        ramkrishna.s.vasudevan added a comment -

        One interesting thing: running PerformanceEvaluation with 3 threads for randomReads and sequentialReads, the code with the copy performs faster. (!!)

        ramkrishna.s.vasudevan added a comment -

        We had an internal discussion on this
        As mentioned in my previous comments,
        Making KVs into Cells makes more sense in the read path, and to do that we need Cells in the readers and DBE seekers.
        There are places where we do comparisons with the index keys (doing a binary search), and inside a block we compare flat keys to see if we have gone past our expected keys.
        So when we define a Cell that does not care how the key part looks and does not need APIs like getKey(), getKeyOffset(), getKeyLength(), we are bound to depend on the cell structure. Hence we need to do our comparisons with Cells on both sides; hence HBASE-10680 was raised.
        Any more reviews on this? Thanks Lars for having a look at it.
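        The cell-on-both-sides comparison mentioned above can be illustrated with a minimal row comparison working directly over the backing arrays, with no flattened combined key. This is a conceptual sketch only; the real CellComparator also handles family, qualifier, timestamp and type:

        ```java
        public class CellCompareSketch {
            // Compares two row keys byte-by-byte over their backing arrays, the way
            // a cell-to-cell comparator avoids materializing a flat key. Unsigned
            // byte comparison matches the lexicographic ordering of serialized keys.
            static int compareRows(byte[] a, int aOff, int aLen,
                                   byte[] b, int bOff, int bLen) {
                int n = Math.min(aLen, bLen);
                for (int i = 0; i < n; i++) {
                    int d = (a[aOff + i] & 0xff) - (b[bOff + i] & 0xff);
                    if (d != 0) {
                        return d;
                    }
                }
                return aLen - bLen; // on a common prefix, the shorter row sorts first
            }
        }
        ```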

        Lars Hofhansl added a comment -

        code with copy performs faster.

        That's unexpected. These both generate Get requests, right?

        I was looking at this code again. SQM.getKeyForNextColumn and SQM.getKeyForNextRow make a brand-new KV (a copy of key, FQ, CQ) just to pass down a search key.
        If we already create one of your derived KVs there we'd save yet another copy.

        ramkrishna.s.vasudevan added a comment -

        If we already create one of your derived KVs there we'd save yet another copy.

        Yes. Should be.

        That's unexpected. These both generate Get requests, right?

        I am not getting the real reason for that. Maybe I can give it another try with YCSB; that would be better, I feel.

        ramkrishna.s.vasudevan added a comment -

        One nice observation from Anoop (may not fit this JIRA but is in context with Cell): with the off-heap efforts and our KV byte buffers moving off heap, if we try to create Cells on them then we have to do a byte copy from the off-heap array to an on-heap array. (Need to check anyway.) So should our Cell interfaces also support working with ByteBuffers? We have not dug in deeper but can keep an eye on this.

        Lars Hofhansl added a comment -

        Yeah, eventually we should switch exclusively to ByteBuffers as that is the only construct that can wrap data on and off heap.
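        The point about ByteBuffer wrapping both kinds of memory can be shown in isolation with plain JDK code; this is a sketch (the helper name is made up), not HBase code:

        ```java
        import java.nio.ByteBuffer;

        public class ByteBufferSketch {
            // The same absolute-read logic works whether the buffer wraps an on-heap
            // byte[] or owns direct (off-heap) memory -- the caller cannot tell.
            static short readRowLength(ByteBuffer buf, int keyOffset) {
                return buf.getShort(keyOffset); // absolute get, ignores position
            }

            public static void main(String[] args) {
                byte[] key = {0, 3, 'r', 'o', 'w'};          // 2-byte row length, then row

                ByteBuffer onHeap = ByteBuffer.wrap(key);    // heap-backed view, no copy
                ByteBuffer offHeap = ByteBuffer.allocateDirect(key.length);
                offHeap.put(key);                            // one copy into native memory

                System.out.println(readRowLength(onHeap, 0));   // prints 3
                System.out.println(readRowLength(offHeap, 0));  // prints 3
            }
        }
        ```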

        ramkrishna.s.vasudevan added a comment -

        Able to get a ~2% difference without copying bytes when using YCSB (tested with random reads and 5 clients, averaged over 5 runs). Without copying bytes it seems to be better.

        Lars Hofhansl added a comment -

        Should we tackle the ByteBuffer stuff now (in another jira)? Might be nice anyway. No need to keep byte[], offset, length around, just a ByteBuffer. Is that planned as part of the offheap work anyway?

        ramkrishna.s.vasudevan added a comment -

        Yes, we can raise a JIRA. But I am working on something with respect to the readers and writers; that may first need a change to Cell in place of KVs. I think as a first level we can get this in. This would help in making changes to the index keys/block indices also. What do you think?
        Some changes may be required in the DBE interfaces also. For that we can raise a separate JIRA.

        Anoop Sam John added a comment -

        Should we tackle the ByteBuffer stuff now (in another jira)?

        +1. This will become clearer once we are into the off-heap stuff. So along with that we can tackle this also.

        Anoop Sam John added a comment -

        Without copying bytes it seems to be better.

        So as per this result, what is the % degradation over the current way (if there is any such degradation)?

        ramkrishna.s.vasudevan added a comment -

        So with the same environment and setup, random reads with 5 clients, the difference between no-copy and the existing way was about ~0.2 to 0.5% degradation. But that is not very scientific.

        stack added a comment -

        So not copying is faster using YCSB?

        You need to add good tests for the additions to CellComparator.

        + public int compareFlatKey(Cell left, Cell right) {

        What is a 'flat' key? Why do we have a Cell compare in KV? Should it be in CellUtil or CellComparator? Or this is how KV is now? It takes Cells? It seems so.

        Here too. Why in KV are we talking Cells?

            - public int compareTimestamps(final KeyValue left, final KeyValue right) {
            + public int compareTimestamps(final Cell left, final Cell right) {

        ... and here:

            - public int compareRows(final KeyValue left, final KeyValue right) {
            + public int compareRows(final Cell left, final Cell right) {

        That said, nice to see our use of Cell in KV compares.

        But I'd say KV methods should take KVs, not Cells? If it takes Cells, could be comparing a non-KV and it could actually work? Is this what we want? Maybe this is just uglyness left over from how KV has been used/abused up to this? But I'm thinking these compares would be Cell compares out in a CellUtil or CellCompare class?

        You have long lines in here Mr. ramkrishna.s.vasudevan

        You should mark these with @VisibleForTesting

        + // Only for test case purpose

        It is good that the byte compares added are not public.

        Shouldn't this be UnsupportedOperationException in your new key-only class?

            + public byte[] getValueArray() {
            +   return HConstants.EMPTY_BYTE_ARRAY;
            + }

        Same for value offset and length.

        Tags too?

        What is the difference between KeyAloneKeyValue and a copy of the original key but with no value? I am wary of introducing a new type? Or, a new type that just threw UnsupportedOperationException when you tried to get a value...

        Why do we have to create a new key when we pass to a comparator? Is it because we need to parse the bytes so we can pass in an Object? That makes sense. I see now why it is needed. Suggest renaming it KeyOnlyKeyValue rather than KeyAloneKV. Also, the inner class needs a bit of a class comment on why it is needed: i.e. you have a byte array but the comparator wants a Cell of some type... rather than parse the whole KV byte buffer.... This is used in the case where we have key-only bytes; i.e. the block index?

        Is passing 0 correct in the below?

            + return comparator.compareFlatKey(key,
            +     new KeyValue.KeyAloneKeyValue(current.keyBuffer, 0, current.keyLength));

        Should it be an offset? We do this '0' in a few places.

        Yeah, what is a 'flat' key? Is it key only?

        So, this is the replacement: seekToKeyInBlock ? Purge the old stuff!!!!

        We have tests for the above?

        Should this be a CellComparator rather than a KVComparator:

            + public int compareKey(KVComparator comparator, Cell key);

        (Sorry if you answered this already).

        Needs doc:

            + public static int binarySearch(byte[][] arr, Cell leftCell, RawComparator<Cell> comparator) {

        The array of byte arrays has Cells in it, or it seems KVs? Will it always be arrays of byte arrays? Or will the binary search be in a block? Or is the 'arr' here a block?

        Formatting:

            -         forceBeforeOnExactMatch);
            -   } else {
            +         forceBeforeOnExactMatch);
            +   } else {
                  return seekToOrBeforeUsingPositionAtOrAfter(keyOnlyBytes, offset, length,
            -         forceBeforeOnExactMatch);
            +         forceBeforeOnExactMatch);

        Are we creating a KV where we did not create one before here?

            - return this.delegate.seekBefore(key, offset, length);
            + return seekBefore(new KeyValue.KeyAloneKeyValue(key, offset, length));

        Or is it that I am just misreading? (It is being created elsewhere anyways)

        Why we add these byte-based methods?

        + public int reseekTo(byte[] key) throws IOException {

        + public int reseekTo(byte[] key, int offset, int length)

        We should let this out?

        + public org.apache.hadoop.hbase.io.hfile.HFile.Reader getReader() {

        Any chance of micro benchmarks on difference between this patch and what was there before?

        Why we adding a method that passes key as bytes?

        + public BlockWithScanInfo loadDataBlockWithScanInfo(final byte[] key, int keyOffset,
        + int keyLength, HFileBlock currentBlock, boolean cacheBlocks,
        + boolean pread, boolean isCompaction)
        + throws IOException {

        The ByteBuffers here come from where?

        + static int locateNonRootIndexEntry(ByteBuffer nonRootBlock, Cell key, KVComparator comparator) {
        + int entryIndex = binarySearchNonRootIndex(key, nonRootBlock, comparator);

        Yeah, why you add these byte array versions?

        + public boolean seekBefore(byte[] key, int offset, int length) throws IOException {
        + HFileBlock seekToBlock = reader.getDataBlockIndexReader().seekToDataBlock(key, offset,
        + length, block, cacheBlocks, pread, isCompaction);

        Is there duplicated code between HFile2 and HFile3?

        Just remove?

            + @Deprecated
            + int seekTo(byte[] key) throws IOException;
            + @Deprecated
            + int seekTo(byte[] key, int offset, int length) throws IOException;

        This looks good:

            - int result = s.reseekTo(k.getBuffer(), k.getKeyOffset(), k.getKeyLength());
            + int result = s.reseekTo(k);

        ... and this:

            - int ret = scanner.reseekTo(getLastOnCol(curr).getKey());
            + int ret = scanner.reseekTo(getLastOnCol(curr));

        We need this: TestNewReseekTo ? Why not just change the current test to use Cells instead?

        Is TestNewSeekTo a copy of the old code? If so, just replace the byte based with the new Cell based?

        Patch looks great otherwise. Good on you ramkrishna.s.vasudevan

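        On the repeated "what is a 'flat' key?" question above: the flat key is the serialized key portion of a KeyValue (row length, row, family length, family, qualifier, timestamp, type) with no value attached. A self-contained sketch of that layout, using plain ByteBuffer rather than the real KeyValue constants:

        ```java
        import java.nio.ByteBuffer;

        public class FlatKeySketch {
            // Builds the KeyValue-style flat key:
            //   2B row length | row | 1B family length | family | qualifier | 8B ts | 1B type
            static byte[] makeFlatKey(byte[] row, byte[] family, byte[] qualifier,
                                      long timestamp, byte type) {
                ByteBuffer buf = ByteBuffer.allocate(
                        2 + row.length + 1 + family.length + qualifier.length + 8 + 1);
                buf.putShort((short) row.length).put(row);
                buf.put((byte) family.length).put(family);
                buf.put(qualifier);
                buf.putLong(timestamp).put(type);
                return buf.array();
            }

            // Reads the row length back out, as a key-only Cell view would.
            static short rowLength(byte[] flatKey) {
                return ByteBuffer.wrap(flatKey).getShort(0);
            }
        }
        ```

        This is why a key-only wrapper can serve the comparators: every field offset is derivable from the lengths embedded in the flat key itself.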
        ramkrishna.s.vasudevan added a comment -

        But I'd say KV methods should take KVs, not Cells? If it takes Cells, could be comparing a non-KV and it could actually work? Is this what we want? Maybe this is just uglyness left over from how KV has been used/abused up to this? But I'm thinking these compares would be Cell compares out in a CellUtil or CellCompare class?

        See HBASE-10532. All the above compares have been moved over there. But for this JIRA I have still maintained things as KVComparator. Did not want to change the KVComparator part here. I could change that also and call CellUtil or CellComparator. Let me see how to handle that here.

        Shouldn't this be unsupportedoperationexception in your new key only class?

        I think yes. But I faced some issue, hence added it. Let me check it once again in the next patch.

        Why we have to create new key when we pass to a comparator?

        I will add suitable comments.

        Should it be an offset? We do this '0' in a few places.

        Wherever an offset is needed I have used that; wherever 0 is needed I have used 0. I can cross-check once again.

        So, this is the replacement: seekToKeyInBlock ? Purge the old stuff!!!!

        I did not do that, just for the sake of easy review. Will purge all the duplicate code.

        The array of byte arrays has Cells in it or it seems KVs? Will it always be arrays of byte arrays?

        I would suggest that in the follow-up JIRAs we can change to Cells rather than byte[].

        All the last comments are about the refactoring part. I have not removed the old code, hence you see them; I can remove them too. testReseek() I will change to work with Cells, but the thing to note is that previously it was working with RawBytesComparator; I am planning to change it to use KVComparator only. Same with TestSeekTo.

        ramkrishna.s.vasudevan added a comment -

        Thanks for the review Stack. I have an rb link also, which would help to review better. Will keep updating my patch over there too.
        https://reviews.apache.org/r/18570/

        Let me come up with a complete micro benchmark after updating the comments and removing the duplicate code.

        ramkrishna.s.vasudevan added a comment -

        The ByteBuffers here come from where?

        It is already as per the existing logic.

                ByteBuffer buffer = block.getBufferWithoutHeader();
                index = locateNonRootIndexEntry(buffer, key, comparator);
        
        ramkrishna.s.vasudevan added a comment -

        I think for now we need to add Cell comparison methods in KVComparator also, because we have a lot of code internally that references KVComparator and calls comparator.compareXXX().
        So I would suggest we make the change to using a CellComparator in another JIRA. If not, for now I will do this:

            protected int compareRowKey(final Cell left, final Cell right) {
              return CellComparator.compareRows(left, right);
            }

        ramkrishna.s.vasudevan added a comment -

        I am also planning to remove the RawBytesComparator in all the test cases. Once we change to Cell, this comparison may not work correctly.

        Andrew Purtell added a comment -

        (Ram) So should our cell interfaces have support to work with ByteBuffers also.

        ByteRange

        (Lars) we should switch exclusively to ByteBuffers as that is the only construct that can wrap data on and off heap.

        +1, but ByteRange, not BB

        Anoop Sam John added a comment -

        So we can have an off-heap-buffer-backed ByteRange impl also (during the off-heap work).

        Andrew Purtell added a comment -

        So we can have one off heap buffer backed ByteRange impl also (During the offheap work)

        Right. ByteRange will need to evolve, but we can take care to avoid issues with using ByteBuffer directly that are suboptimal for us, such as the inability to inline get* and put* methods, or bounds checking we can elide. Also we could have multiple allocators for on and off heap arenas, at least in the beginning while we are sorting this all out, backed by JDK ByteBuffer, Netty ByteBuf, IBM Java BoxedPackedObject (just an example of something more esoteric), and so on.

        ramkrishna.s.vasudevan added a comment -

        Shouldn't this be UnsupportedOperationException in your new key-only class?

        For the value part we can throw UnsupportedOperationException, but for the tag part we cannot, because in the latest comparisons we also compare the seqId added during WAL replay, which is inside a tag. So to find an exact key we may end up comparing the tag with the seqId also.

        ramkrishna.s.vasudevan added a comment -

        Updated patch. Removes repetitive code. Addresses review comments.
        Stack, the changes from using KVComparator to CellComparator has to be done in another issue. For now using the CellComparator and calling it in KVComparator.
        Reviews and comments are welcome.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_3.patch [ 12634704 ]
        ramkrishna.s.vasudevan added a comment -

        I think there is some problem with the RB. I am getting this error:
        'The file "hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java" (revision 18d54f8) was not found in the repository'.
        HFilePerformanceEvaluation and TestHFilePerformance are still using KeyValue.RawBytesComparator. Can we change that?
        Will prepare a micro benchmark report in the beginning of next week.

        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12634704/HBASE-10531_3.patch
        against trunk revision .
        ATTACHMENT ID: 12634704

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 21 new or modified tests.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8995//console

        This message is automatically generated.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        Updated patch. The previous patch did not apply cleanly, so this one is rebased against the latest code.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_4.patch [ 12635034 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635034/HBASE-10531_4.patch
        against trunk revision .
        ATTACHMENT ID: 12635034

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 26 new or modified tests.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        -1 javadoc. The javadoc tool appears to have generated 5 warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 findbugs. The patch appears to introduce 6 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 lineLengths. The patch introduces the following lines longer than 100:
        + public static int compareRowsWithCommonFamilyPrefix(Cell left, Cell right, int familyCommonPrefix) {
        + public static int compareRowsWithCommmonQualifierPrefix(Cell left, Cell right, int qualCommonPrefix) {
        + if (key.getFamilyLength() + key.getQualifierLength() == 0 && key.getTypeByte() == Type.Minimum.getCode()) {
        + if (right.getFamilyLength() + right.getQualifierLength() == 0 && right.getTypeByte() == Type.Minimum.getCode()) {
        + return comparator.compare(key, new KeyValue.KeyOnlyKeyValue(bb.array(), bb.arrayOffset(), bb.limit()));
        + "Seeking for a key in bottom of file, but key exists in top of file, failed on seekBefore(midkey)");
        + BlockWithScanInfo blockWithScanInfo = loadDataBlockWithScanInfo(key, currentBlock, cacheBlocks,
        + public BlockWithScanInfo loadDataBlockWithScanInfo(Cell key, HFileBlock currentBlock, boolean cacheBlocks,
        + static int binarySearchNonRootIndex(Cell key, ByteBuffer nonRootIndex, KVComparator comparator) {
        + new KeyValue.KeyOnlyKeyValue(nextIndexedKey, 0, nextIndexedKey.length)) < 0)) {

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.TestCheckTestClasses

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9022//console

        This message is automatically generated.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        Wraps the long lines and introduces the category for the failed test.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_5.patch [ 12635067 ]
        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_5.patch [ 12635067 ]
        ramkrishna.s.vasudevan added a comment -

        Reattaching patch.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_5.patch [ 12635069 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        ramkrishna.s.vasudevan added a comment -

        Did some benchmarks.
        Mainly took readings on a single node and ran YCSB for pure reads (gets) and sequential scans.
        The gets were averaged over 50 runs:
        Without patch
        Throughput: 1418.42019 ops/sec
        With patch
        Throughput: 1422.117704 ops/sec

        The scans were averaged over 15 runs:
        Without patch
        Throughput: 351.0178049 ops/sec
        With patch
        Throughput: 351.4325864 ops/sec

        Every run tries to perform 50000 operations. So overall I don't find much difference in read performance with and without the patch. I have taken the plain case (no encoding). I can add results with some encodings like FAST_DIFF also.
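The deltas above can be quantified with a quick calculation. This standalone sketch (class and method names are illustrative, not part of the patch) computes the relative throughput difference in percent:

```java
// Quick sanity check on the benchmark numbers: how large is the
// with-patch vs. without-patch throughput delta, relative to baseline?
public class ThroughputDelta {

    // Relative difference of `after` vs. `before`, in percent.
    static double deltaPercent(double before, double after) {
        return (after - before) / before * 100.0;
    }

    public static void main(String[] args) {
        double gets = deltaPercent(1418.42019, 1422.117704);    // gets, ops/sec
        double scans = deltaPercent(351.0178049, 351.4325864);  // scans, ops/sec
        System.out.printf("gets: %+.3f%%, scans: %+.3f%%%n", gets, scans);
    }
}
```

Both deltas come out to a fraction of a percent (roughly +0.26% for gets and +0.12% for scans), which supports the conclusion that the patch is throughput-neutral for this workload.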

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635069/HBASE-10531_5.patch
        against trunk revision .
        ATTACHMENT ID: 12635069

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 26 new or modified tests.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 lineLengths. The patch introduces the following lines longer than 100:
        + return loadBlockAndSeekToKey(blockWithScanInfo.getHFileBlock(), blockWithScanInfo.getNextIndexedKey(), rewind, key,
        + KeyValue cell = new KeyValue(k , Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("val"));
        + + Bytes.toStringBinary(keys[i]) + ")", 0, scanner.seekTo(KeyValue.createKeyValueFromKey(keys[i])));
        + kv = new KeyValue(Bytes.toBytes(key), Bytes.toBytes("family"),Bytes.toBytes("qual"),Bytes.toBytes(value));

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.client.TestMultiParallel
        org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
        org.apache.hadoop.hbase.regionserver.TestMinorCompaction
        org.apache.hadoop.hbase.io.hfile.TestHFileSeek
        org.apache.hadoop.hbase.client.TestFromClientSide
        org.apache.hadoop.hbase.regionserver.TestGetClosestAtOrBefore
        org.apache.hadoop.hbase.regionserver.TestBlocksRead
        org.apache.hadoop.hbase.master.TestDistributedLogSplitting

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9026//console

        This message is automatically generated.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment - - edited

        All the failed test cases pass with this patch. Note that for the blooms and the meta index reader we still go with RawBytesComparator, so HFileBlockIndex.rootBlockContainingKey() is retained.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_6.patch [ 12635282 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635282/HBASE-10531_6.patch
        against trunk revision .
        ATTACHMENT ID: 12635282

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 26 new or modified tests.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 lineLengths. The patch introduces the following lines longer than 100:
        + return loadBlockAndSeekToKey(blockWithScanInfo.getHFileBlock(), blockWithScanInfo.getNextIndexedKey(), rewind, key,
        + KeyValue cell = new KeyValue(k , Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("val"));
        + + Bytes.toStringBinary(keys[i]) + ")", 0, scanner.seekTo(KeyValue.createKeyValueFromKey(keys[i])));
        + kv = new KeyValue(Bytes.toBytes(key), Bytes.toBytes("family"),Bytes.toBytes("qual"),Bytes.toBytes(value));

        +1 site. The mvn site goal succeeds with this patch.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9032//console

        This message is automatically generated.

        ramkrishna.s.vasudevan added a comment -

        I am very sorry. I was focusing on the test case and forgot to wrap the long lines.

        ramkrishna.s.vasudevan added a comment -

        Once the review is done I will update the patch. Stack is reviewing; more reviews are welcome.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        Updated patch. It addresses the review comments, fixes the long lines, and the test cases pass with it.
        Stack had a point that this patch may slow down the read path, since we are changing a simple byte[] comparison into creating an object and then comparing. But to get Cells into the read path we need to go in phases: when I tried to change everything to Cells at once, the changes were big and difficult to follow. After this patch, if we are able to change the DBE to work with Cells, we will be in better shape to work with Cells throughout. The comparators also need to be changed.
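The object-creation cost being discussed here is one small flyweight allocation, not a copy of the key bytes. A minimal standalone sketch of the idea (a hypothetical class that mirrors, but does not reproduce, HBase's KeyValue.KeyOnlyKeyValue; the 2-byte big-endian row-length prefix follows the KeyValue key layout):

```java
// Hypothetical "key-only cell": wraps an existing byte[] holding a serialized
// key (row length, row, family, qualifier, ts, type) without copying it. The
// comparator can then read row/family/etc. in place via offset accessors.
public class KeyOnlyCell {
    private final byte[] buf;  // shared with the caller, never copied
    private final int offset;
    private final int length;

    public KeyOnlyCell(byte[] buf, int offset, int length) {
        this.buf = buf;
        this.offset = offset;
        this.length = length;
    }

    // The key starts with a 2-byte big-endian row length.
    public int getRowLength() {
        return ((buf[offset] & 0xff) << 8) | (buf[offset + 1] & 0xff);
    }

    // Row bytes begin immediately after the 2-byte length prefix.
    public int getRowOffset() {
        return offset + 2;
    }

    // Returns the backing array itself; comparison code uses
    // getRowOffset()/getRowLength() rather than extracting a copy.
    public byte[] getRowArray() {
        return buf;
    }
}
```

The allocation is a single small object with three fields, which is what makes passing a Cell instead of a raw byte[] cheap enough to consider on the seek path.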

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_7.patch [ 12635491 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635491/HBASE-10531_7.patch
        against trunk revision .
        ATTACHMENT ID: 12635491

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 32 new or modified tests.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        -1 site. The patch appears to cause mvn site goal to fail.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9041//console

        This message is automatically generated.

        Andrew Purtell added a comment - - edited

        +1

        I reviewed the patch as a transitional change and the refactoring looks good to me. Also followed along on ReviewBoard, where Stack gave this a good look.

        What are the follow-up JIRAs? Maybe put a comment here leading to them.

        stack added a comment -

        +1

        Add follow-up JIRAs here as per Andrew.

        ramkrishna.s.vasudevan added a comment -

        The related JIRAs include the one added as a subtask in HBASE-7320:
        https://issues.apache.org/jira/browse/HBASE-7319
        https://issues.apache.org/jira/browse/HBASE-10680
        https://issues.apache.org/jira/browse/HBASE-10800
        https://issues.apache.org/jira/browse/HBASE-10801

        Doing HBASE-10680 could, in a way, avoid creating the new type of KeyValue, because both sides will become Cells.
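
        The direction discussed above — deprecating the overloads that take a flat key byte[] (historically built via kv.getBuffer()) in favor of overloads that accept the Cell itself — can be sketched as follows. This is a hypothetical, simplified illustration: SimpleCell, ToyScanner, and the row-only comparison are invented for the example and are not the real HFileScanner/Cell API.

        ```java
        public class SeekToSketch {
            /** Minimal stand-in for org.apache.hadoop.hbase.Cell; illustration only. */
            interface SimpleCell {
                byte[] getRowArray();
                int getRowOffset();
                short getRowLength();
            }

            /** Toy scanner over sorted row keys, mimicking the shape of the seekTo contract. */
            static class ToyScanner {
                private final byte[][] sortedRows;

                ToyScanner(byte[][] sortedRows) {
                    this.sortedRows = sortedRows;
                }

                /** Old style: the caller must first flatten the key into a byte[]. */
                @Deprecated
                int seekTo(byte[] row) {
                    return seek(row, 0, row.length);
                }

                /** New style: the Cell is passed directly; no intermediate flat key is built. */
                int seekTo(SimpleCell key) {
                    return seek(key.getRowArray(), key.getRowOffset(), key.getRowLength());
                }

                /** Returns the index of the first stored row >= the given key, or -1 if past the end. */
                private int seek(byte[] buf, int off, int len) {
                    for (int i = 0; i < sortedRows.length; i++) {
                        if (compare(sortedRows[i], buf, off, len) >= 0) {
                            return i;
                        }
                    }
                    return -1;
                }

                /** Lexicographic unsigned-byte comparison of a stored row against a key slice. */
                private static int compare(byte[] stored, byte[] buf, int off, int len) {
                    int n = Math.min(stored.length, len);
                    for (int i = 0; i < n; i++) {
                        int d = (stored[i] & 0xff) - (buf[off + i] & 0xff);
                        if (d != 0) {
                            return d;
                        }
                    }
                    return stored.length - len;
                }
            }
        }
        ```

        The point of the Cell-accepting overload is that the scanner extracts the components (row, family, qualifier, timestamp, type) from the Cell on demand, instead of requiring the caller to serialize them into one buffer first.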

        Anoop Sam John added a comment -

        public static int compareRowsWithCommonFamilyPrefix(Cell left, Cell right, int familyCommonPrefix)
        This compares the two families, taking the common part into account. A better name could be compareFamiliesWithCommonPrefix?
        The same applies to compareRowsWithCommonQualifierPrefix.
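
        The intent of these *WithCommonPrefix comparators — resuming a lexicographic comparison past a prefix that an earlier comparison already proved equal — can be illustrated with a standalone sketch. The method below is hypothetical and operates on whole arrays; the actual HBase comparators work on Cell component slices with offsets and lengths.

        ```java
        public class CommonPrefixCompare {
            /**
             * Lexicographic unsigned-byte comparison that skips the first
             * commonPrefix bytes, which a prior comparison already showed equal.
             * Illustrative sketch only, not the real CellComparator code.
             */
            static int compareWithCommonPrefix(byte[] left, byte[] right, int commonPrefix) {
                int minLen = Math.min(left.length, right.length);
                for (int i = commonPrefix; i < minLen; i++) {
                    int diff = (left[i] & 0xff) - (right[i] & 0xff);
                    if (diff != 0) {
                        return diff;
                    }
                }
                // All compared bytes equal: the shorter array sorts first.
                return left.length - right.length;
            }
        }
        ```

        During a seek through encoded blocks, consecutive keys often share a long prefix, so starting the comparison at the known common prefix avoids re-scanning bytes that cannot differ.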

        ramkrishna.s.vasudevan added a comment -

        Sure. I will wait for your review as well before proceeding with the commit.

        ramkrishna.s.vasudevan added a comment -

        Any more comments, Anoop Sam John? If not, I will address your comments and go for the commit.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        Same patch as in RB

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_8.patch [ 12635994 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635994/HBASE-10531_8.patch
        against trunk revision .
        ATTACHMENT ID: 12635994

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 32 new or modified tests.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9061//console

        This message is automatically generated.

        ramkrishna.s.vasudevan added a comment -

        The latest commits did not allow the patch to apply.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_9.patch [ 12636008 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12636008/HBASE-10531_9.patch
        against trunk revision .
        ATTACHMENT ID: 12636008

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 32 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.regionserver.TestBlocksScanned

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9062//console

        This message is automatically generated.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        Latest patch. All test cases pass with this. The bug in the SamePrefixComparator in the previous patch has been corrected.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_12.patch [ 12636953 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12636953/HBASE-10531_12.patch
        against trunk revision .
        ATTACHMENT ID: 12636953

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 34 new or modified tests.

        -1 javadoc. The javadoc tool appears to have generated 6 warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 lineLengths. The patch introduces the following lines longer than 100:
        + && qualCommonPrefix < (current.lastCommonPrefix - (3 + right.getRowLength() + right
        + List<DataBlockEncoder.EncodedSeeker> encodedSeekers = new ArrayList<DataBlockEncoder.EncodedSeeker>();
        + HFileBlockEncodingContext encodingCtx = getEncodingContext(Compression.Algorithm.NONE, encoding);

        +1 site. The mvn site goal succeeds with this patch.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9100//console

        This message is automatically generated.

        ramkrishna.s.vasudevan added a comment -

        Will wrap the long lines before commit. This should be the final patch, as things look fine.
        TestSeekToBlockWithEncoders verifies the part of the code where the common prefix is used while reading with encoders. Once this goes in, we can start targeting the other issues as well.

        ramkrishna.s.vasudevan added a comment -

        Planning to commit this today EOD. Please have a look. Though I got two +1s, there is a change in the SamePrefixComparator code.
        Anoop Sam John

        Anoop Sam John added a comment -

        Sure, Ram. Sorry for the delay.

        stack added a comment -

        Are the findbugs and javadoc warnings yours, ramkrishna.s.vasudevan? Fine fixing them in a follow-on, I'd say.

        ramkrishna.s.vasudevan added a comment -

        The reports are missing. Maybe I will trigger one more run, get the latest reports, and fix them before commit. Anoop has given some nice comments over in RB. Checking them out.

        Anoop Sam John added a comment -

        Are the findbugs and javadocs yours

        No. These are fixed in trunk now.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        Latest patch. I think this should be final. Addresses Anoop's comments. Thanks a lot for a good review.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_13.patch [ 12637603 ]
        Anoop Sam John added a comment -

        Gone through the latest changes in BufferedDataBlockEncoder. Looks good Ram. Thanks for the continued effort.

        Anoop Sam John made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        Resubmitting for QA. Got a green run when I ran it on my Linux box.

        ramkrishna.s.vasudevan made changes -
        Attachment HBASE-10531_13.patch [ 12637612 ]
        ramkrishna.s.vasudevan made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12637612/HBASE-10531_13.patch
        against trunk revision .
        ATTACHMENT ID: 12637612

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 34 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.master.TestRegionPlacement

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9135//console

        This message is automatically generated.

        ramkrishna.s.vasudevan added a comment -

        Committed to trunk. Thanks for the reviews, Stack and Andy.
        Thanks also to Anoop for the in-depth reviews that spotted some issues.

        ramkrishna.s.vasudevan made changes -
        Status Patch Available [ 10002 ] Resolved [ 5 ]
        Hadoop Flags Reviewed [ 10343 ]
        Resolution Fixed [ 1 ]
        ramkrishna.s.vasudevan added a comment -

        The test failure seems unrelated.

        java.lang.ArrayIndexOutOfBoundsException: 10
        	at java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:343)
        	at org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
        	at org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
        	at org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
        	at org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
        	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        
        Hudson added a comment -

        FAILURE: Integrated in HBase-TRUNK #5051 (See https://builds.apache.org/job/HBase-TRUNK/5051/)
        HBASE-10531-Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo (Ram) (ramkrishna: rev 1583031)

        • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java
        • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
        • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
        • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
        • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
        • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellComparator.java
        • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeSeeker.java
        • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArrayScanner.java
        • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeCell.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterFactory.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ByteBloomFilter.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompoundBloomFilter.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
        ramkrishna.s.vasudevan added a comment -

        The test failures

        org.apache.hadoop.hbase.master.TestMasterFailover.testSimpleMasterFailover
        org.apache.hadoop.hbase.regionserver.TestHRegionBusyWait.testBatchPut

        did not occur locally in my test runs or in the HadoopQA run, and the subsequent runs do not show them either. JFYI.

        Enis Soztutar added a comment -

        Closing this issue after 0.99.0 release.

        Enis Soztutar made changes -
        Status: Resolved [ 5 ] → Closed [ 6 ]

        Transition                 | Time In Source Status | Execution Times | Last Executer          | Last Execution Date
        Patch Available → Open     | 20d 15h 49m           | 10              | ramkrishna.s.vasudevan | 29/Mar/14 06:57
        Open → Patch Available     | 22d 5h 30m            | 11              | ramkrishna.s.vasudevan | 29/Mar/14 06:58
        Patch Available → Resolved | 10h 22m               | 1               | ramkrishna.s.vasudevan | 29/Mar/14 17:20
        Resolved → Closed          | 329d 6h 10m           | 1               | Enis Soztutar          | 21/Feb/15 23:30

          People

          • Assignee: ramkrishna.s.vasudevan
          • Reporter: ramkrishna.s.vasudevan
          • Votes: 0
          • Watchers: 10

            Dates

            • Created:
            • Updated:
            • Resolved:
