HBase · HBASE-8496

Implement tags and the internals of how a tag should look like

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.98.0, 0.95.2
    • Fix Version/s: 0.98.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      Tags are additional metadata to be added with the KVs.
      To enable the tags to be persisted in the HFiles, V3 version of HFile should be used.
      <property>
            <name>hfile.format.version</name>
            <value>3</value>
        </property>
      Tags have the following format
      <2 byte tag length><1 byte type code><tag>
      where the 1-byte type code identifies the kind of tag and <tag> is a byte[] holding the tag data.
      To add tags using Puts, the following can be used:
      Put.add(byte[] family, byte[] qualifier, byte[] value, Tag[] tag)
      Put.add(byte[] family, byte[] qualifier, long ts, byte[] value, Tag[] tag)
      Note that even after changing the version to V3, the no-tag case continues to work as it does in the V2 format.
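The tag layout in the release note can be sketched in plain Java. This is a minimal illustration of the byte layout only, not HBase code; the class and method names are made up, and the 2-byte length field is assumed to count only the tag data bytes.

```java
import java.nio.ByteBuffer;

// Illustrative encoder/decoder for the layout
// <2 byte tag length><1 byte type code><tag data>.
public class TagFormatSketch {
    public static byte[] encode(byte typeCode, byte[] tagData) {
        ByteBuffer buf = ByteBuffer.allocate(2 + 1 + tagData.length);
        buf.putShort((short) tagData.length); // 2-byte tag length
        buf.put(typeCode);                    // 1-byte type code
        buf.put(tagData);                     // tag payload
        return buf.array();
    }

    public static byte[] decodeData(byte[] encoded) {
        ByteBuffer buf = ByteBuffer.wrap(encoded);
        int len = buf.getShort() & 0xFFFF;    // read length as unsigned
        buf.get();                            // skip the type code
        byte[] data = new byte[len];
        buf.get(data);
        return data;
    }
}
```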

      Description

      The intent of this JIRA comes from HBASE-7897.
      This would help us decide on the structure and format of tags.

      1. Tag design.pdf
        118 kB
        ramkrishna.s.vasudevan
      2. Tag design_updated.pdf
        46 kB
        ramkrishna.s.vasudevan
      3. Tag_In_KV_Buffer_For_reference.patch
        303 kB
        ramkrishna.s.vasudevan
      4. Performance_report.xlsx
        28 kB
        ramkrishna.s.vasudevan
      5. HBASE-8496.patch
        337 kB
        ramkrishna.s.vasudevan
      6. HBASE-8496_6.patch
        582 kB
        ramkrishna.s.vasudevan
      7. HBASE-8496_5.patch
        582 kB
        ramkrishna.s.vasudevan
      8. HBASE-8496_4.patch
        579 kB
        ramkrishna.s.vasudevan
      9. HBASE-8496_3.patch
        579 kB
        ramkrishna.s.vasudevan
      10. HBASE-8496_3.patch
        579 kB
        ramkrishna.s.vasudevan
      11. HBASE-8496_3.patch
        579 kB
        ramkrishna.s.vasudevan
      12. HBASE-8496_2.patch
        336 kB
        ramkrishna.s.vasudevan
      13. Comparison.pdf
        50 kB
        ramkrishna.s.vasudevan

        Issue Links

          Activity

          ramkrishna.s.vasudevan added a comment -

          Before I start further discussion: what type of changes does one foresee in the current KeyValue structure to support tags, and how does it work in tandem with the different encoders available, including the NONE type?

          Matt Corgan added a comment -

          As for existing encoders, I think someone will have to go back and add tag encoding support to them, else they'll be a black hole for tags. Tags will go in, but none will come out.

          I don't have any use cases in mind for tags, but to share what i was envisioning: the tags section could be composed of a sorted list of byte[]s of the form: [vint numTags][vint length0][byte[] tag0][vint lengthN][byte[] tagN].

          Should probably sort them at write time so it only has to happen once. Assume there will be >=1 reads so might as well sort when writing.

          If the basic structure above is followed, then encoders can use dictionary or trie style encoding for the tag section.
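The tag-section layout Matt describes could be sketched as follows. This is a hypothetical illustration, not HBase's actual encoder; it uses a LEB128-style varint for simplicity, which differs in layout from Hadoop's vint but serves the same purpose for small values.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative codec for [vint numTags][vint length0][tag0]...[vint lengthN][tagN].
public class VintTagSection {
    static void writeVint(ByteArrayOutputStream out, int v) {
        // LEB128-style: 7 bits per byte, high bit marks continuation.
        while ((v & ~0x7F) != 0) { out.write((v & 0x7F) | 0x80); v >>>= 7; }
        out.write(v);
    }
    static int readVint(ByteBuffer in) {
        int v = 0, shift = 0, b;
        do { b = in.get() & 0xFF; v |= (b & 0x7F) << shift; shift += 7; } while ((b & 0x80) != 0);
        return v;
    }
    public static byte[] encode(List<byte[]> tags) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVint(out, tags.size());                       // [vint numTags]
        for (byte[] t : tags) {
            writeVint(out, t.length);                      // [vint lengthI]
            out.write(t, 0, t.length);                     // [tagI]
        }
        return out.toByteArray();
    }
    public static List<byte[]> decode(byte[] section) {
        ByteBuffer in = ByteBuffer.wrap(section);
        int n = readVint(in);
        List<byte[]> tags = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            byte[] t = new byte[readVint(in)];
            in.get(t);
            tags.add(t);
        }
        return tags;
    }
}
```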

          ramkrishna.s.vasudevan added a comment -

          I have some patches ready for this. Before I bring them up for further discussion:
          - From the client perspective, will the tags now be added as part of Puts? Put.add() will now have an option to pass a tag array. One more option we thought of is to have OperationAttributes and set the tags there, but we would need some CPs to take care of this so that tags set in the OperationAttributes can be added to the KVs of the Put.
          - Will Tag be an integral part of the KVs?
          - A sort of new HFile reader and writer is needed to read them and write them to the block byte buffer.

          What are your suggestions on the above? I will come up with patches soon. Thanks all.

          ramkrishna.s.vasudevan added a comment -

          The structure of a tag may look like this:

          <1 byte type code><2 byte tag length><tag>

          We need to provide some TagIterators inside CellUtil so that we will be able to iterate over the tag array.
          The iterator must use the above tag structure to build this info.
          Other utility methods may also be needed, like getNumTags(), or, given a type, getting the tags of that type, etc.
          If we have the structure with the type in it, then it may not be possible to do validation on the client side for specific tag types.
          The reason for having a type is to support different use cases for tags; the CP that we add for each use case should help us achieve that.
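A TagIterator over that structure could be sketched like this. This is an illustrative standalone sketch, not the proposed CellUtil API; TagEntry and iterate are hypothetical names, and the buffer is assumed to hold back-to-back <1 byte type code><2 byte tag length><tag data> entries.

```java
import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Illustrative iterator over a concatenated tag array.
public class TagArraySketch {
    public static class TagEntry {
        public final byte type;
        public final byte[] data;
        TagEntry(byte type, byte[] data) { this.type = type; this.data = data; }
    }

    public static Iterator<TagEntry> iterate(byte[] tagArray) {
        ByteBuffer buf = ByteBuffer.wrap(tagArray);
        return new Iterator<TagEntry>() {
            public boolean hasNext() { return buf.remaining() > 0; }
            public TagEntry next() {
                if (!hasNext()) throw new NoSuchElementException();
                byte type = buf.get();                         // 1-byte type code
                byte[] data = new byte[buf.getShort() & 0xFFFF]; // 2-byte length
                buf.get(data);                                 // tag payload
                return new TagEntry(type, data);
            }
        };
    }
}
```

Utilities such as getNumTags() or a type-filtered lookup would just be loops over this iterator.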

          We also need to identify different use cases for tags other than Visibility and ACLs, so that we can ensure we provide proper client support for tags. Currently the idea is to go with the CP-based approach.
          From the client perspective, will the tags now be added as part of Puts?
          Put.add(KeyValue) will now have an option to pass a tag array. One more option we thought of is to have OperationAttributes and set the tags there.
          Tried out different options on getting Tags working with the KeyValues and existing formats.
          The KV can be modified to

          <keylength><valuelength><keyarray><valuearray><taglength><tagarray>

          So if a KV does not have any tags, the taglength will still be written as 0, but there will not be any tag array.
          This will involve some changes to the format of the HFile writer and reader; probably a new version of the writer/reader is needed. (A minor version bump should be enough?)
          In the case of encoders, the base encoder BufferedDataBlockEncoder will be tag aware; currently no encoding logic is applied to the tag part. It is just written and parsed so that during a scan we are able to get the tags in the output KVs.
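The proposed layout can be illustrated with a standalone sketch. This is not HBase code; it assumes 4-byte length fields throughout, and an empty tag section when taglength is 0, per the paragraph above.

```java
import java.nio.ByteBuffer;

// Illustrative writer/reader for
// <keylength><valuelength><keyarray><valuearray><taglength><tagarray>.
public class KvWithTagsSketch {
    public static byte[] write(byte[] key, byte[] value, byte[] tags) {
        ByteBuffer buf = ByteBuffer.allocate(12 + key.length + value.length + tags.length);
        buf.putInt(key.length).putInt(value.length)
           .put(key).put(value)
           .putInt(tags.length)   // 0 for a tag-less KV
           .put(tags);            // empty array when there are no tags
        return buf.array();
    }

    public static byte[] readTags(byte[] kvBytes) {
        ByteBuffer buf = ByteBuffer.wrap(kvBytes);
        int keyLen = buf.getInt();
        int valLen = buf.getInt();
        buf.position(buf.position() + keyLen + valLen); // skip key and value
        byte[] tags = new byte[buf.getInt()];
        buf.get(tags);
        return tags;
    }
}
```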

          The same applies for the PrefixTree codec. In this case the backward compatibility should be taken care of.

          In case we don't want to do the above, one more thing that can be done is

          <Existing KV format><int – negative integer indicating the length of the tag><tag array>

          Here the negative length is used only when there is a tag, and the existing KV format is left untouched when there is no tag.
          In this approach we would read the next KV's keylength every time and then decide whether a tag is present or not. If not present, we just rewind the position of the buffer.
          This has a performance impact but does not involve changes to the HFile formats.
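The peek-and-rewind check described above could look like this. An illustrative sketch only, assuming a 4-byte marker int after each unchanged KV: a negative value means a tag array of that many bytes follows; a non-negative value is the next KV's keylength, so the buffer is rewound.

```java
import java.nio.ByteBuffer;

// Illustrative check for the negative-length tag marker scheme.
public class NegativeLenSketch {
    public static byte[] maybeReadTags(ByteBuffer buf) {
        if (buf.remaining() < 4) return new byte[0];
        int mark = buf.position();
        int next = buf.getInt();
        if (next < 0) {                 // negative => tag-length marker
            byte[] tags = new byte[-next];
            buf.get(tags);
            return tags;
        }
        buf.position(mark);             // rewind: it was the next KV's keylength
        return new byte[0];
    }
}
```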

          So in both of these cases we end up writing the tag info whether or not the user needs it. One way to avoid that could be the way we do it for MemstoreTS:
          add metadata to the HFile saying tagpresent = true/false based on the KVs in that HFile.
          Even if there is only one KV with a tag, this metadata will be true.
          Then on compaction we read this metadata and decide whether to compact data with tags or without tags.
          The advantage is that for scenarios where there are no tags we will not have a drop in read performance (this applies after compaction is done).
          The downside of this approach is that the KeyValue format itself now has two representations: sometimes the KV we retrieve will have tag info, sometimes it will not.
          Thanks to Anoop and Andy for their suggestions/inputs.

          I have some patches ready for the above approaches, except for the optional-tag part. Wanted to know if that can be provided as a feature in the future? Anyway, I will try out the optional part also, to see what kind of changes/issues we may face while implementing it.
          Comments/feedback welcome. Any other ideas, I am open to hearing them too.

          stack added a comment -

          A few notes on above Ram.

          <1 byte type code><2 byte tag length><tag>

          You don't want to use a vint for length? Most tags will only need a single byte for length I'd imagine.

          Have we said what tags will actually look like? For instance for ACL and Visibility, what will they look like? Will a tag for Visibility be the Accumulo list of who can view?

          On the 1 byte type, I am not clear why.

          Would suggest too that you start up a one or two page design doc. Will help w/ review of design. Easier than trying to aggregate JIRA comments spread over multiple issues.

          Do you think we need tag iterators? Is that overkill since for the vast majority of cases there will be one tag or two at most? (I'd guess).

          Tags will be done by a CP? Does it have to be that way? Can tags not be first class beside timestamp, column qualifier, etc.

          If we are having the structure with the type in it then it may not be possible to actually have some validation on the client side for specific tag types.

          Pardon me, I do not follow the above.

          Is putting tags after value a good idea? We'll have to skip over the value to see if we should return the KV? Would be easier if the tag were part of the key?

          Sounds like we need to change the hfileformats (smile)?

          "option tag part" is optionally adding tags to hfile?

          Good on you Ram.

          ramkrishna.s.vasudevan added a comment -

          Would suggest too that you start up a one or two page design doc.

          Sure.

          You don't want to use a vint for length?

          I can do this.

          For instance for ACL and Visibility, what will they look like?

          ACL related tags will be something like UserTablePermissions saying which user has what type of permissions.

          new UserTablePermissions()
              .add(user, new TablePermission(Permission.Action.READ))

          The above is an example from Andy's patch in HBASE-6222. Something like the above will now be added as part of tags.
          Visibility tags are the ones that Accumulo supports: a set of valid ASCII characters that define the labels, thus ensuring authorization for the users who pass those labels on reads.

          On the 1 byte type, I am not clear why.

          This is something to be looked into keenly. My idea was: if I say ACL and Visibility are the two types of tags, I need different CPs to process them. So, to identify which CP should do what, the tag needs some categorization, which is provided by the type part. I will cover more in the doc that I attach here.

          Do you think we need tag iterators?

          Maybe needed; will see if there is another way.

          Can tags not be first class beside timestamp, column qualifier, etc.

          Is putting tags after value a good idea?

          Oops, so you mean that tags need to be part of the KV comparisons themselves. I was thinking that would be a major change. So we need to sort KVs based on tags also?

          Sounds like we need to change the hfileformats (smile)?

          Yes, of course it needs to change.

          "option tag part" is optionally adding tags to hfile?

          Yes. So if the user does not have any tags, at least the HFiles that get created after compactions will match the existing format.
          And thanks for your reviews/comments, Stack.

          Andrew Purtell/Anoop Sam John
          you would like to pitch in with your ideas here?

          Andrew Purtell added a comment -

          Is putting tags after value a good idea?

          Oops, So you mean that tags needs to be part of the KV comparisions itself. Was thinking that would be a major change. So we need to sort KVs based on tags also?

          I would strongly caution against this. Tags will be arbitrary metadata and won't have anything to do with locating the value itself.

          On the other hand, some use cases may want it, see https://issues.apache.org/jira/browse/HBASE-6222?focusedCommentId=13396190&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13396190

          I don't have a good answer for resolving this conflict.

          We may just want to live with the constraints imposed by not having tags be part of the key. This would significantly simplify implementation.

          Jeffrey Zhong added a comment -

          I want to post the following use case to see if the current design can accommodate it or can easily be extended to support it.

          At a high level, we'd like support for optional system tags. Basically, HBase internals can tag a KV with a value and consume these tags without using coprocessors. These tags are invisible to end users (application end users).

          A possible use case (just an example, as HBASE-8701 may take a different approach) is HBASE-8701, where we can tag a KV with a sequence-number value during log replay, and later these tags are removable by a major compaction.

          Thanks.

          ramkrishna.s.vasudevan added a comment -

          Jeffrey Zhong

          Basically HBase internals can tag a KV with a value and consume these tags without using co-processors.

          The HBase internals will know about the tags, and the read and write paths will know how to deal with them. But they will just read the tag part of the KV from the ByteBuffer and return KVs with or without tags, per that individual KV.
          If we need to take a specific action for system tags on the read path, then we need to add that behaviour inside the HBase read path. Currently this would be handled by CPs based on the type of the tags.
          So maybe once I attach a first-cut version of the patch you will be able to understand better.

          ramkrishna.s.vasudevan added a comment -

          Attaching a simple design document that describes how tags will be supported by HBase and the advantage of using KeyValueCodec.
          It also touches on how tags can be implemented in an optional way when we don't go with KeyValueCodec. Please feel free to share your comments/reviews. Thanks to Andy and Anoop for their reviews/suggestions.

          Ted Yu added a comment -

          On page 6:

          In case of per HFile

          The sentence seems to be incomplete.

          once we close the file we add the Meta data saying tagpresent = true and avg_tag_len = 0.

          avg_tag_len = 0 would indicate that there is no tag present. Why do we need two flags (tagpresent and avg_tag_len) ?
          Later compaction is mentioned where tagpresent is changed to false. But we should be able to achieve this at the time of flush, right ?

          byte[] tagArray = kv.getTagsArray();
          Tag decodeTag = KeyValueUtil.decodeTag(tagArray);
          

          In the above sample, I would expect decodeTag() to return more than one Tag.
          Would all Tags in the KeyValue be returned to filterKeyValue() ? I think it would be better if Tag.Type.Visibility is passed to decodeTag() so that only visibility Tag is returned.

          ramkrishna.s.vasudevan added a comment -

          avg_tag_len = 0 would indicate that there is no tag present. Why do we need two flags (tagpresent and avg_tag_len) ?

          When we don't use the KeyValueCodec approach, then when I flush the memstore I get every KV and write it into blocks. So in one flush I can have only one KV with a tag and all others without tags, meaning I cannot make a decision on the presence of tags before I complete one HFile block.
          So while the flush happens I would always write the taglength part of the KV, but there may not be any tags in that block. So in order to decode this block when I read it, I need an indicator that I wrote this block with the taglength field (the tagpresent flag), while avg_tag_len indicates whether I actually need to read the 4-byte int or can just skip those 4 bytes and reposition the buffer.

          Later compaction is mentioned where tagpresent is changed to false. But we should be able to achieve this at the time of flush, right ?

          A flush would always write the taglength even if tags are not present. When the same HFile block is read for compaction, I would use the above logic and avoid writing even the taglength part in the compacted file, so in this compacted file the HFile block would have tagpresent=false and avg_tag_len=0. Please note that at the HFile block level these would be two individual bytes: 1 indicates true and 0 indicates false.
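The per-block decision described above can be sketched as follows. This is an illustrative sketch, not HBase code; tagpresent and avg_tag_len are the flag names from this discussion, and a 4-byte taglength field is assumed.

```java
import java.nio.ByteBuffer;

// Illustrative reader-side decision: tagpresent says a taglength field was
// written at all; avg_tag_len == 0 says every taglength in the block is zero,
// so the reader can skip the 4 bytes without decoding them.
public class TagMetaSketch {
    public static byte[] readTagSection(ByteBuffer block,
                                        boolean tagPresent, int avgTagLen) {
        if (!tagPresent) return new byte[0];        // no taglength field at all
        if (avgTagLen == 0) {                       // field present but always 0
            block.position(block.position() + 4);   // skip it unread
            return new byte[0];
        }
        byte[] tags = new byte[block.getInt()];     // read taglength, then tags
        block.get(tags);
        return tags;
    }
}
```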

          In the above sample, I would expect decodeTag() to return more than one Tag.

          Yes, Ted, you are right. In the example that I attached there was only one tag. Ideally we would use it like this:

              Iterator<byte[]> tagIterator = CellUtil.getTagIterator(tagArray);
              List<Tag> tagList = new ArrayList<Tag>();
              while (tagIterator.hasNext()) {
                byte[] tag = tagIterator.next();
                Tag t = KeyValueUtil.decodeTag(tag);
                tagList.add(t);  // collect every decoded tag, not just the last one
              }

          I think it would be better if Tag.Type.Visibility is passed to decodeTag() so that only visibility Tag is returned.

          We can have one such method so that decodeTag would only return a tag if the specified type of tag is present.

          Would all Tags in the KeyValue be returned to filterKeyValue() ?

          In terms of visibility/ACL tags, if a user is authorized to read that KV, then returning the KV with tags should be fine, I feel. We can discuss this.
          I am working on the performance reports for using KeyValueCodec. Will share more info on that soon.
          Thanks for your reviews.

          Matt Corgan added a comment -

          A quick comment on encoding the tag lengths - I would recommend using a VInt rather than fixed 2 bytes. Most tags will be < 128 or 256 bytes (depending on which VInt you choose), so will fit nicely into 1 byte. A 1 byte VInt is pretty trivial to decode, maybe as easy or easier than 2 bytes.
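Sketching the trade-off Matt describes: a minimal varint codec (illustrative only, not Hadoop's WritableUtils or HBase's actual implementation) where lengths under 128 occupy a single byte, while the fixed-width scheme always spends two:

```java
import java.io.ByteArrayOutputStream;

// Illustrative varint codec for tag lengths: values < 128 fit in one byte,
// larger values spill into continuation bytes (high bit set = more to come).
public class TagLengthVInt {
    static byte[] encode(int len) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((len & ~0x7F) != 0) {
            out.write((len & 0x7F) | 0x80); // low 7 bits plus continuation flag
            len >>>= 7;
        }
        out.write(len); // final byte, high bit clear
        return out.toByteArray();
    }

    static int decode(byte[] buf, int offset) {
        int value = 0, shift = 0;
        while ((buf[offset] & 0x80) != 0) {
            value |= (buf[offset++] & 0x7F) << shift;
            shift += 7;
        }
        return value | ((buf[offset] & 0x7F) << shift);
    }
}
```

For a typical short tag the length costs one byte instead of two; only tags of 128 bytes or more pay for a second byte.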

          stack added a comment -

          Add author, date, and JIRA you are referring to.

          Tags are going to be handled by cps and not be native? Or is the thought that we do them as a CP first and then pull them native?

          Why does hfile care about tags? It is not doing interpretation of the Cells it has?

          You don't want to add addTags to Mutation so you can do Put#addTags? We can do this later I suppose.

          If tags as attribute, maybe have a special key, something like, tags.visibility... so know which attributes are for inclusion in tags.

          Should you make a TaggedKeyValueCodec or you think it fine just making KeyValueCodec handle tags if present?

          So, pass a cellblock to hfile and then add codec metadata to hfile metadata? Hmm... I suppose using a codec breaks when you get to hfile because you have to do a kv at a time, unless you can do append(Codec). Besides, hfile is broke at the moment in that it only knows one kind of kv serialization and it is baked in everywhere. How you propose we fix this? Can we move hfile to be Cell based?

          Is it good adding tags at end of kv? I suppose it is fine for now getting them in. We can do an alternate serialization later, one that keeps tags close so we can spin through them fast w/o having to read value data.

          "With KeyValuecodecs in place now we would issue two reads to read the total KV size and then read the entire byte array and then form the KVs." – it has to be two reads? Could we not write the block as a KeyValueCodec? And then in hfile metadata say what Codec class writing blocks was?

          We need to change hfile so it does Cells. HFileBlockEncoders also are broke in that they presume the KV serialization format. HFileBlockEncoders should be like the new Cell Codecs.

          You say "All the encoders will need to understand the tags." Would be best for all if hfiles understood Cells. When you say additional 'read', do you mean a new seek?

          "...but we would not be able to use the KeyValueCodec.decoder() on the HFileBlocks." can we not make this happen? HFileBlockEncoders are broke.

          "This would involve changes to the HFileBlockHeader." Could hfile blocks be cellblocks? And in meta data for hfile say what decoder to use?

          Thanks Ram.

          stack added a comment -

          I suppose bottom-line, can we not just do Cells rather than KV-with-tags? The same problems have to be solved (basically). If Cells, you'll get your tags carried along for you. Your CPs could then exploit them when present. The serialization could be your proposed KV-with-tags-on-the-end but when we touch hfile, can we pass cellblocks and read cellblocks (encoding and/or compression could be metadata in hfile – I suppose this means one encoding/compression per file which is probably fine. Compressed/encoded blocks in memory would have to be accessed w/ CellScanner... would need resettable CellScanner though I suppose.

          Andrew Purtell added a comment -

          My read is tags are in core but the example users are coprocessors. Except for the operation attribute part, but that's just the other side of the story: if a cp is consuming tags presumably it is setting/managing them too.

          To what extent are we comfortable changing the client API to be tags aware? In 0.94 too?

          Could hfile blocks be cellblocks? And in meta data for hfile say what decoder to use?

          I think it makes sense to have one codec for cells, or transitionally cells as two distinct serializers - keyvalue+tags, leaving the keyvalue class alone for the most part. We have keyvaluecodec in both trunk/0.95 and 0.94. Right now it's only used for WALs. We should extend this to use for HFile blocks too - one common codec for keyvalue/cell serialization dealing with stuff like tags. So agreed hfile block encoding should change at least to incorporate this new common interface. If we have leeway to change how HFile blocks are constructed, maybe we can bump the HFile minor version and try more than just cell by cell. We could pack tags together at the block level (and even dedup/share them) if we can go this far with the kind of changes people would be comfortable with in trunk and 0.94.
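Packing and deduplicating tags at the block level, as suggested here, could look roughly like this hypothetical per-block dictionary (all names are invented for illustration): each distinct tag payload is stored once, and cells reference it by a small integer index, so the same visibility label repeated on every cell in a block costs one copy plus an index per cell:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical per-block tag dictionary: each distinct tag payload is stored
// once and cells reference it by a small integer index.
public class BlockTagDictionary {
    private final List<byte[]> entries = new ArrayList<>();
    private final Map<String, Integer> index = new HashMap<>();

    // Returns the dictionary index for this tag, adding it if unseen.
    int add(byte[] tag) {
        String key = Arrays.toString(tag); // content-based key for the lookup
        Integer idx = index.get(key);
        if (idx == null) {
            idx = entries.size();
            entries.add(tag.clone());
            index.put(key, idx);
        }
        return idx;
    }

    byte[] get(int idx) {
        return entries.get(idx);
    }

    int distinctTags() {
        return entries.size();
    }
}
```

On write the dictionary would be serialized once into the block; on read it is rebuilt before cells are decoded.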

          stack added a comment -

          You fellas want this to work on 0.94?

          Otherwise, hfilev3! Go for it!

          Andrew Purtell added a comment -

          You fellas want this to work on 0.94?

          Yes, in the sense for cell ACLs, visibility labels, and such we would like to share as much as possible between 0.94 (prod) and 0.95+ (future)

          ramkrishna.s.vasudevan added a comment -

          Add author, date, and JIRA you are referring to.

          Will do.

          Should you make a TaggedKeyValueCodec or you think it fine just making KeyValueCodec handle tags if present?

          I think KeyValueCodec will serve the purpose because KVCodec does not mind what the internal byte structure is, so with or without tags it should be able to handle it. What do you feel?

          If tags as attribute, maybe have a special key, something like, tags.visibility.

          A special constant you mean? Should be fine. Mine just showed an example; we can make that key something like a keyword.

          Besides, hfile is broke at the moment in that it only knows one kind of kv serialization and it is baked in everywhere. How you propose we fix this? Can we move hfile to be Cell based?

          HFile going Cell based would be the best choice, but for converting the current hfile blocks to Cell based blocks I need to check the changes. Let me give it a try.

          We can do an alternate serialization later, one that keeps tags close so we can spin through them fast w/o having to read value data.

          Making an alternate serialization later may involve changes to the existing comparators I feel; maybe with Cell based this change would be simpler I think.

          HFileBlockEncoders also are broke in that they presume the KV serialization format. HFileBlockEncoders should be like the new Cell Codecs.

          Yes. Everywhere the code just tries to go with KV format.

          When you say additional 'read', do you mean a new seek?

          No, I meant the read issued on the byte buffers.

          Could hfile blocks be cellblocks? And in meta data for hfile say what decoder to use?

          If Cells, you'll get your tags carried along for you. Your CPs could then exploit them when present. The serialization could be your proposed KV-with-tags-on-the-end but when we touch hfile, can we pass cellblocks and read cellblocks (encoding and/or compression could be metadata in hfile – I suppose this means one encoding/compression per file which is probably fine. Compressed/encoded blocks in memory would have to be accessed w/ CellScanner... would need resettable CellScanner though I suppose.

          As said above, I will check this also. Maybe I can comment on this based on the analysis that I do in the next couple of days and get back on this.

          If we try to use KeyValueCodec per Cellblock then KeyValueCodec.decoder() should work on the Cellblock, but KeyValueCodec does not help much with the decoding time.

          The performance results with KVCodec seemed worse than with the other prototypes that I tried out.

          Anoop Sam John added a comment -

          A quick comment on encoding the tag lengths - I would recommend using a VInt rather than fixed 2 bytes

          Yes Matt, we have discussed about that also. I think we have not done it yet in any of the PoCs.

          ramkrishna.s.vasudevan added a comment -

          @Stack/Andy
          We could create the CellBlock decoder that reads Cellblocks, but the decoder would have to operate on the ByteBuffer and not on the InputStream directly as it is designed now.
          Ideally we would be reading from DFS using IOUtils.readFully and have a ByteBuffer filled up, on which we can have a new CellDecoder that also has the resettable property.
          Did you mean it this way, Stack? This argument is the same as what I have described above wrt KeyValueCodec.
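One shape the decoder described here could take, as a rough sketch (hypothetical class, not the real KeyValueCodec or CellScanner API): read the whole block into memory in one bulk read, wrap it in a ByteBuffer, and expose iteration over length-prefixed cells plus the resettable property:

```java
import java.io.*;
import java.nio.ByteBuffer;

// Hypothetical resettable decoder over a block already read into memory.
// The block is assumed to be a sequence of <4-byte length><cell bytes> records.
public class ResettableCellDecoder {
    private final ByteBuffer buf;

    ResettableCellDecoder(InputStream in, int blockSize) {
        byte[] block = new byte[blockSize];
        try {
            // One bulk read for the whole block, as with IOUtils.readFully.
            new DataInputStream(in).readFully(block);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        this.buf = ByteBuffer.wrap(block);
    }

    boolean hasNext() {
        return buf.remaining() >= 4;
    }

    // Decode the next <4-byte length><cell bytes> record.
    byte[] next() {
        byte[] cell = new byte[buf.getInt()];
        buf.get(cell);
        return cell;
    }

    // The "resettable property": rewind to the start of the block.
    void reset() {
        buf.rewind();
    }
}
```

Positional seeks could then reposition the buffer directly instead of advancing cell by cell, which is the gap Ram points out in the stream-oriented Codec interface.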

          stack added a comment -

          I think KeyValueCodec will serve the purpose because KVCodec does not mind what the internal byte structure is, so with or without tags it should be able to handle it. What do you feel?

          Ok.

          Ideally we would be reading from DFS using IOUtils.readFully and have a ByteBuffer filled up, on which we can have a new CellDecoder that also has the resettable property.

          Is ByteBuffer required Ram? IOUtils.readFully fills a byte array?

          ramkrishna.s.vasudevan added a comment -

          I would like to get your suggestions on this.
          -> The ideal solution would be to use the codecs to work with Tags. But the current APIs in Codec are more suitable for WAL and RPC than for an HFile level implementation; at least they are not straightforward. The HFile scan involves positional seeks, and we tend not to read KV by KV; most of the time we keep repositioning the byte buffer and forming the KVs.
          Usage of a codec would not allow us to do this, because Codec.advance() only lets us advance per KV, and this type of advance() is not suitable for positional seeks. Hence it forces us either to introduce APIs like previous() or backward() (for seekBefore) in the CellScanner, or to create a Seeker interface in the Decoder (like the Seeker in the DataBlockEncoders) and implement the positional seeks in it. I am a bit reluctant to do this type of positional seek in the util classes (per discussion with Stack).
          -> Another problem when we use a Codec would be with the DataBlockEncoders. How the DataBlockEncoders work now is:
          HFileWriter -> append(kv) -> form HFileBlock byte buffer -> Encoders read the byte buffer -> encode per KV into a new byte buffer -> the new byte buffer is persisted.
          Read flow
          ==========
          Read the encoded byte buffer -> the Seekers in the DataBlockEncoders decode the byte buffer to form the actual byte buffer.

          When we try to use a Codec we may want to modify this as:
          HFileWriter -> Codec.encode(kv) -> form HFileBlock byte buffer -> Codec.decode(ByteBuffer) to form KVs -> encode per KV into a new byte buffer -> [if this KV has tags we may need another encoder here for tags] -> the new byte buffer is persisted (1)
          Read flow
          ========
          Read the encoded byte buffer -> the Seekers in the DataBlockEncoders decode the byte buffer to form the actual byte buffer.

          One thing to be noted is that we may have to rewrite all the encoding algorithms to work with Tags, either by subclassing the actual ones or by writing new ones. Now how can this decision be made? Here again we have a few options:
          -> If the user has tags, add new encoding algorithms to the DataBlockEncoding enum, like PrefixKeyDeltaEncodingWithTags, FastDiffKeyEncodingWithTags etc., and whenever we see that the codec used for the hfile has the ability to understand tags we just use the new algorithms.
          -> The other way could be to let the code internally instantiate the new classes and work with them to use the Tags. But this would involve changes in the code with some if/else checks, this would apply for every algorithm, and tomorrow if a new codec is added we may have to keep doing this.
          -> Another thing that Anoop suggested was to have a new HFileCodec which internally has the HFileCompressedEncoder, and every time you add a new type of codec it is up to the user to implement PrefixKey, FastDiff, DiffKey and PrefixTree to work with that codec.

          One more thing would be to change the way DataBlockEncoders work. As you can see in (1), since the block encoders work on the HFileBlocks we are not able to make the most of the codec way of encoding and decoding. So we could make it work per KV, in the sense:
          HFileWriter -> append(kv) -> Codec.encode(kv) -> create the encoded buffer -> fill in the buffer till the block size is reached.

          As you can see, all the above changes impact the core code and we need a good amount of changes to do this. Considering the effort, for 0.96 this would be a major effort.
          One suggestion that we would like to make, also reading Stack's earlier comment, is that HFileV3 would be a viable solution.
          So HFileV3 would be the one which would know about the Tags and the read and write path in HFileV3 would understand tags. This would also mean that the datablock encoder code path will have some ugly if/else checks to handle the code flow with and without Tags (or something similar). I think this would let us have Tag support in the 0.96 code base, and the same could be changed, based on discussion in the community, to bring in the changes for 0.98 with a codec and also make the code talk in terms of Cells.
          I can raise a discussion/vote on the dev list for this. It would be great if we can come up with a consensus on this.
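As a rough illustration of the "tags at the end of the KV" layout under discussion (a simplified stand-in, not the exact HFile V3 byte format): a record carries key and value sections followed by a 2-byte tags length, where a length of 0 reproduces the no-tags, V2-style case:

```java
import java.nio.ByteBuffer;

// Illustrative cell-with-optional-tags record:
// <4B keyLen><4B valueLen><key><value><2B tagsLen><tags bytes>
// tagsLen == 0 reproduces the no-tags (V2-style) case.
public class TaggedCellCodec {
    static byte[] encode(byte[] key, byte[] value, byte[] tags) {
        int tagsLen = (tags == null) ? 0 : tags.length;
        ByteBuffer b = ByteBuffer.allocate(4 + 4 + key.length + value.length + 2 + tagsLen);
        b.putInt(key.length).putInt(value.length);
        b.put(key).put(value);
        b.putShort((short) tagsLen);
        if (tagsLen > 0) {
            b.put(tags);
        }
        return b.array();
    }

    // Returns only the tags section; empty array when the cell carries none.
    static byte[] decodeTags(byte[] record) {
        ByteBuffer b = ByteBuffer.wrap(record);
        int keyLen = b.getInt();
        int valueLen = b.getInt();
        b.position(b.position() + keyLen + valueLen); // skip key and value
        byte[] tags = new byte[b.getShort()];
        b.get(tags);
        return tags;
    }
}
```

A tag-aware encoder would read the 2-byte length and either parse or skip the tags section; an unaware one must not see it at all, which is why the format choice is tied to the HFile version.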

          Andrew Purtell added a comment -

          So HFileV3 would be the one which would know about the Tags and the read and write path in HFileV3 would understand tags.

          What I especially like about this is any performance or functional risks which might possibly be introduced by the tags work is completely optional to take on this way - just don't select HFileV3. We can even mark it as experimental until 0.98.

          I have been able to observe Ram's work over the past couple of months, trying out various other approaches just shy of introducing a new file format. It's not a decision arrived at overnight.

          +1

          ramkrishna.s.vasudevan added a comment -

          Thinking on the compatibility issues once we have Codec to work with HFiles,

          If we introduce V3 without changing the HFileBlock headers, it would be easy for us to make the Codec work in trunk.
          As the plan is to write the Codec used into the HFile header, for those files written using the existing HFileV2 and the HFileV3 (tags) we would need to read the HFile header, and once we see that there is no codec recorded in these hfiles, find out the major version of these files.
          If it is V2, instantiate a default codec that works with KVs (without tags), and if the major version is V3, instantiate a default codec that works with KVs (with tags). This way we solve the problem of compatibility.
          Later V2 and V3 would ideally become the same code and we can decide on the version later.

          Ideally speaking, the codec way of encoding and decoding the KVs is the best way of doing things. But because of all said above we need a simpler and viable solution, and that is why we would like to move ahead with HFile V3.
          This V3 way of implementation makes the Tags an optional feature and so not risky.
          Considering the above facts we would like to propose that going with HFileV3 would help us have tags in 0.96.
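The compatibility plan above, picking a default decoder from the HFile major version when no codec is recorded in the header, could be sketched like this (all names are hypothetical, not the actual HBase classes):

```java
// Hypothetical decoder selection following the compatibility plan:
// files that record no codec in their header fall back to a default
// chosen by the HFile major version.
public class CodecSelector {
    enum Decoder { KV_NO_TAGS, KV_WITH_TAGS }

    static Decoder select(String codecFromHeader, int majorVersion) {
        if (codecFromHeader == null) {
            // Older files: infer the default decoder from the major version.
            return majorVersion >= 3 ? Decoder.KV_WITH_TAGS : Decoder.KV_NO_TAGS;
        }
        // Future files name their codec explicitly in the header.
        return Decoder.valueOf(codecFromHeader);
    }
}
```

This keeps existing V2 files readable unchanged while letting V3 files carry the tag-aware serialization.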

          ramkrishna.s.vasudevan added a comment -

          Patch that works with HFile V3 and keeps Tags in memory with the KV. The attached design doc is about having Tags in the KV byte array.
          We tried both options, and the attached patch is with in-memory Tags rather than appending to the KV byte array. There are a few pros and cons with both approaches. The support for PerfEval and LoadTestTool will be attached in a separate JIRA.

          Many thanks to Anoop for his quality ideas/contributions and review.
          Thanks to Andy for his inputs for the prototypes and reviews.

          Please provide your suggestions/feedback.

          ramkrishna.s.vasudevan added a comment -

I can also attach the patch that adds tags within the KV byte buffer, in case we need to discuss some of the design decisions further. To add on, there are some pros and cons with both approaches.

          ramkrishna.s.vasudevan added a comment -

Attaching the comparison doc between in-memory tags and tags in the KV byte buffer.

          Ted Yu added a comment -

          Great effort, Ram.

          With the attached patch, I got the following:

          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hbase-server: Compilation failure: Compilation failure:
          [ERROR] /Users/tyu/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java:[346,79] VALUE_LENGTH has private access in org.apache.hadoop.hbase.PerformanceEvaluation
          [ERROR] /Users/tyu/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java:[346,36] cannot find symbol
          [ERROR] symbol  : method generateData(java.util.Random,int)
          [ERROR] location: class org.apache.hadoop.hbase.PerformanceEvaluation
          
          Ted Yu added a comment -

Browsed through HFileWriterV3, which needs class javadoc.
Although there is code duplication with HFileWriterV2, I think it is fine for a first cut.

HFileReaderV3 needs a license header and class javadoc.

          +  protected static class EncodedScannerV3 extends EncodedScannerV2 {^M
          +    private DataBlockEncoder.EncodedSeeker seeker = null;^M
          +    private HFileReaderV3 reader;^M
          

Why does EncodedScannerV3 need to keep a reference to HFileReaderV3? AbstractHFileReader#Scanner#getReader() would return the reader, right?

Can you upload the patch onto the review board? Please remove the trailing ^M.

          Anoop Sam John added a comment -

          Why does EncodedScannerV3 need to keep reference to HFileReaderV3 ?

To use the includesTag field and avoid type casting, I kept a reference this way:

          dataBlockEncoder.createSeeker(reader.getComparator(),
          +          includesMemstoreTS, this.reader.includesTag);
          

          Please remove the trailing ^M.

Sure, sorry.

          Although there is code duplication with HFileWriterV2, I think it is fine for first cut.

Yes, I tried to avoid much of it. Some more duplication could still be removed with refactoring.

          Thanks Ted

          Ted Yu added a comment -

It would be desirable to have a performance comparison, for both writing and reading, between HFileV2 and HFileV3 where the non-tag contents are the same.

          Ted Yu added a comment -

EncodedScannerV3 has a reference to HFileReaderV3, but all it needs is reader.includesTag and reader.getComparator().
reader is accessible to EncodedScannerV2; we only need to store reader.includesTag, or cast reader to HFileReaderV3 and obtain includesTag at runtime.

          In Mutation.java:

          +  /*
          +   * Create a KeyValue with this objects row key and the Put identifier.
          +   *
          +   * @return a KeyValue with this objects row key and the Put identifier.
          +   */
          +  KeyValue createPutKeyValue(byte[] family, byte[] qualifier, long ts, byte[] value, Tag[] tag) {
          

Can you add javadoc for the tag parameter? BTW, calling the parameter tags would be better, since an array of Tags may be passed.

          TagFilter.java needs license. In its filterKeyValue():

          +      if (t.getType() == (byte) 1) {
          

          Can you introduce a constant so that the meaning of tag type 1 can be easily understood ?

          In transform() method:

          +      } catch (CloneNotSupportedException e) {
          +      }
          

          Add debug log for the above case ?

In CellUtil, I see copyTagTo() and copyTagToForPrefix(), but they look the same. Did I miss something?

          In createCell():

          +    // I need a Cell Factory here.  Using KeyValue for now. TODO.
          +    // TODO: Make a new Cell implementation that just carries these
          +    // byte arrays.
          

Would the above be done in a follow-on JIRA?

          For tagsIterator():

          +      public void remove() {
          +        throw new IllegalStateException();
          

Mind adding a message to the exception?
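Ted's suggestion of a named constant in place of the magic number `(byte) 1` could look roughly like the following. The constant name here is hypothetical; the actual type values and names were a patch decision, not something fixed in this thread.

```java
// Illustrative sketch only: a named constant for the tag type checked in
// TagFilter.filterKeyValue(). The name VISIBILITY_TAG_TYPE is hypothetical.
class TagTypes {
    static final byte VISIBILITY_TAG_TYPE = (byte) 1;

    // Equivalent of the original check: t.getType() == (byte) 1
    static boolean isVisibilityTag(byte tagType) {
        return tagType == VISIBILITY_TAG_TYPE;
    }
}
```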

          Andrew Purtell added a comment -

          I posted the changes to hbase-common and hbase-server to RB on Ram's behalf here: https://reviews.apache.org/r/12981/ See comments there on testing done. Ram indicated he will shortly post to a subtask a separate patch with optional test-mostly changes to hbase-client and hbase-it. Those are not essential changes.

          ramkrishna.s.vasudevan added a comment -

The patch that works with the KV buffer. Test cases are not yet completely passing with this, but it can be used to compare the changes between the two approaches.

          ramkrishna.s.vasudevan added a comment -

Patch addressing Ted's comments.
The createCell() comment is just copied from the existing createCell(). I have not yet updated this on the review board, as Andy created the RB from a git patch. Let me create one tomorrow.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12594548/HBASE-8496_2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 94 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 7 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings (more than the trunk's current 0 warnings).

          -1 lineLengths. The patch introduces lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6497//console

          This message is automatically generated.

          Ted Yu added a comment -

In PrefixTreeBlockMeta.java there is a method setTagValueBytes().
Is it used anywhere?

          In PrefixTreeCodec.java :

          +      //TODO : Should i pass true  here for includeTags
          +      searcher = DecoderFactory.checkOut(block, true, true);
          

Should the includesTag parameter be passed to the checkOut() method?

          In TestRowData.java, I saw:

          -      all.add(new TestRowDataSimple());
          +      /*all.add(new TestRowDataSimple());
                 all.add(new TestRowDataDeeper());
          

Did you encounter an issue that prevented the test from passing?

          In PrefixTree tests, false is passed for includesTag:

          -    encoder = new PrefixTreeEncoder(os, includeMemstoreTS);
          +    encoder = new PrefixTreeEncoder(os, includeMemstoreTS, false);
          

          Do you want to add tag tests for PrefixTree in another issue ?

          Andrew Purtell added a comment -

          Making this critical because HBASE-6222 is blocked by it. RM: Please feel free to change this back if you feel otherwise.

          ramkrishna.s.vasudevan added a comment -

Updated design document. Patch to follow based on this.
Making the writing of tags optional could be done in a follow-up JIRA.

          Ted Yu added a comment -

          Skimmed through latest design doc.

          Pls note that we would not be persisting any tag related information on the HFileBlock.

          Based on the Encoding/Decoding context state the Encoder and decoding logic of the algos would handle tags.

          Can you elaborate on the above a bit more ?

          ramkrishna.s.vasudevan added a comment -

>>Pls note that we would not be persisting any tag related information on the HFileBlock.
HFileBlock is not modified in terms of what is persisted. In the recent patch we are just enclosing some of its state variables in a POJO.
So, specifically for tags, there are no modifications to HFileBlock, and therefore no need to subclass HFileBlock or add a new version to it.
>>Based on the Encoding/Decoding context state the Encoder and decoding logic of the algos would handle tags.
In previous versions of the patches, includeMemstoreTS and includeTags required changes to the interface APIs. To avoid this, we enclosed them inside the Context (in the recent patches the context holds the POJO mentioned above).
So from the context we can get the various decision points, such as whether to include tags and whether to include the memstoreTS.
The encoding APIs carry the context, but the seekers that do the actual decoding do not use contexts. So we have made changes such that both the encoding logic and the seekers use the context-related objects.

          stack added a comment -

          On the design doc:

          + A whole byte to keep the type when the number of types will be small seems profligate in our base type?
          + Two bytes of length ditto.

          You could save 50% putting type and length together in a short?

          Or is the tag length, the overall tags length?

          What else changed in the design doc?

          ramkrishna.s.vasudevan added a comment -

          Or is the tag length, the overall tags length?

That is the individual tag length. The overall tags length is still a short.
And then we have a byte indicating the tag type.

          What else changed in the design doc?

It now describes how we will be using V3 and how it is implemented to avoid larger code changes. It also covers the changes made to the existing Encoding/Decoding contexts to implement tags in the DBEs.
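As a sketch of the layout being discussed, the following walks a tags buffer laid out as repeated `<2 byte tag length><1 byte type><tag data>` entries. This is illustrative only: it assumes the per-tag length counts the type byte plus the data, and the SimpleTag class is a stand-in, not the HBase Tag class.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of walking a tags buffer of repeated
// <2 byte tag length><1 byte type><tag data> entries.
// Assumption: each length covers the type byte plus the tag data.
class TagWalker {

    static final class SimpleTag {       // stand-in for the HBase Tag class
        final byte type;
        final byte[] data;
        SimpleTag(byte type, byte[] data) { this.type = type; this.data = data; }
    }

    static List<SimpleTag> parse(byte[] tagsBuffer) {
        List<SimpleTag> tags = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(tagsBuffer);  // big-endian by default
        while (buf.remaining() > 0) {
            int tagLen = buf.getShort() & 0xFFFF;      // 2-byte length
            byte type = buf.get();                     // 1-byte type code
            byte[] data = new byte[tagLen - 1];        // remainder after type byte
            buf.get(data);
            tags.add(new SimpleTag(type, data));
        }
        return tags;
    }
}
```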

          ramkrishna.s.vasudevan added a comment -

Posted a new RB: https://reviews.apache.org/r/13311/

          ramkrishna.s.vasudevan added a comment -

I have updated the patch on RB. Please share your comments/feedback.

          Andrew Purtell added a comment -

          +1 on latest patch.

          ramkrishna.s.vasudevan added a comment -

Latest patch updated to the current code base in RB (after the LoadTestTool changes, DataType changes, etc.).
All test cases pass.
Tested in a cluster with different combinations of Encoding, including NONE.
Tested with and without compression.
Tested cluster restart scenarios, and tested WAL replay with Tags.

          Andrew Purtell added a comment -

          +1 on latest patch

          ramkrishna.s.vasudevan added a comment -

Latest patch available in RB; it addresses the review comments there.
It would be great if reviews were done so we could get this into 0.95.2.
Stack, Elliott Clark
What do you guys think?

          ramkrishna.s.vasudevan added a comment -

Once Stack completes his review, I will update the patch. I have already updated the patch after Jon's change went in; the final updated patch will be posted after Stack is done with his review.
Thanks, Stack.

          stack added a comment -

Lads: I made it about halfway through the patch up on RB. I was mostly skimming. High-level, it has come a long way. It is getting close. It needs some more review cycles, I'd say, before it can go in, but it is viable now (where I had my doubts previously). As RM for 0.96, I am making the call that this will not make the 0.96 cut – it is too late – but keep going and get this into trunk, and work on getting 0.98 out just after 0.96. I can help out w/ reviews in a few days, after I have 0.95.2 tied off and the first 0.96RC is up; pester others for reviews in the meantime. Good work, lads (ramkrishna.s.vasudevan and anoop)

          ramkrishna.s.vasudevan added a comment -

An update on this, since the feature has moved to 0.98:
-> We have added the HFileContext changes
-> Done with compression of tags in the WAL and HFiles using a dictionary (yet to test)
-> Made tags optional too.
Will update the patch on RB after some more testing.

          ramkrishna.s.vasudevan added a comment -

Updated the RB: https://reviews.apache.org/r/13311/
This change has Tags with V3 and the HFileContext changes, and also makes tags optional in V3.
All test cases pass. Ran the PE and LoadTestTool on a single machine and on a cluster with 4 nodes.
Ensured that HFiles written with Version 2 can be read back with Version 3 by switching versions. Please provide feedback/reviews so that we can take this into 0.98.

          ramkrishna.s.vasudevan added a comment -

          @Ted
Thanks for the reviews. Once your review is done, I will update the patch.

          Ted Yu added a comment -

          I am currently on page 5.
Since HFileContext is used in so many places, it would be nice if HFileContext used the Builder pattern.
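The Builder pattern Ted suggests might look roughly like this. Field and method names here are illustrative assumptions, not the HFileContext API that was later committed.

```java
// Sketch of a Builder for an HFileContext-like class; names are hypothetical.
class HFileContextSketch {
    private final boolean includesTags;
    private final boolean includesMvcc;
    private final int hfileVersion;

    private HFileContextSketch(Builder b) {
        this.includesTags = b.includesTags;
        this.includesMvcc = b.includesMvcc;
        this.hfileVersion = b.hfileVersion;
    }

    boolean includesTags() { return includesTags; }
    boolean includesMvcc() { return includesMvcc; }
    int hfileVersion() { return hfileVersion; }

    static class Builder {
        private boolean includesTags = false;  // defaults preserve V2 behavior
        private boolean includesMvcc = true;
        private int hfileVersion = 2;

        Builder withIncludesTags(boolean v) { includesTags = v; return this; }
        Builder withIncludesMvcc(boolean v) { includesMvcc = v; return this; }
        Builder withHFileVersion(int v) { hfileVersion = v; return this; }
        HFileContextSketch build() { return new HFileContextSketch(this); }
    }
}
```

A caller would then construct the context fluently instead of threading many boolean parameters through constructors.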

          ramkrishna.s.vasudevan added a comment -

Addressed a few of the review comments; a few more remain. Will update a patch shortly.
Regarding the Builder pattern, would you mind moving it to an improvement?

          ramkrishna.s.vasudevan added a comment -

          @Ted
Will update the patch soon.

I would like to get this patch committed to trunk soon. Any reviews/comments are welcome.

          Ted Yu added a comment -

          Regarding the Builder pattern, you mind moving it as an improvement?

I created a sub-task for the above. I am fine with implementing the Builder pattern later.

          ramkrishna.s.vasudevan added a comment -

Latest patch for tags; it addresses the review comments.
Tested with 4 nodes (RS and DN) + 1 node (Master and NN). Ran for about 20 hours without any issues. Tested various scenarios: with and without compression, with and without encoding, and reading files written with V2 after upgrading to V3. Will attach performance results as well.

          ramkrishna.s.vasudevan added a comment -

          Performance report attached. Shows there is no performance regression with the latest changes.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12604019/Performance_report.xlsx
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7300//console

          This message is automatically generated.

          ramkrishna.s.vasudevan added a comment -

          Reattaching for hadoopQA.

          ramkrishna.s.vasudevan added a comment -

          Please provide your reviews/comments. I plan to commit this to trunk/0.98 in the next 48 hours if there are no objections.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12604194/HBASE-8496_3.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 185 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7315//console

          This message is automatically generated.

          ramkrishna.s.vasudevan added a comment -

          Updated patch to make hadoopQA run.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12604214/HBASE-8496_4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 185 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 18 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 8 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7318//console

          This message is automatically generated.

          Andrew Purtell added a comment -

          +1 on latest patch

          I filed two subtasks for follow ups along with the builder pattern request from Ted.

          ramkrishna.s.vasudevan added a comment -

          Patch corrects all FindBugs and javadoc warnings. Hope the count becomes 0 now.
          All test cases passed except
          testAll (org.apache.hadoop.hbase.thrift.TestThriftServer), which seems unrelated. Submitting for hadoopQA.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12604374/HBASE-8496_5.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 185 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7328//console

          This message is automatically generated.

          ramkrishna.s.vasudevan added a comment -

          Updated patch; corrects the single FindBugs warning.
          Plan to commit this later in the day unless there are objections.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12604382/HBASE-8496_6.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 185 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7330//console

          This message is automatically generated.

          ramkrishna.s.vasudevan added a comment -

          Committed the patch HBASE-8496_6.patch to trunk. For reference, the performance report has also been attached here.
          My sincere thanks to Anoop, who paired up and helped get the design and coding done. Thanks for his ideas on HFileContext and his patient code reviews.
          Thanks to Andy for reviewing the design and helping with various rounds of testing.

          Thanks to Stack, Ted, Matt and Jon for their reviews.

          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4544 (See https://builds.apache.org/job/HBase-TRUNK/4544/)
          HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) (ramkrishna: rev 1525269)

          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueTestUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumType.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodec.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngest.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithTags.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestLazyCfLoading.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeCodec.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/DecoderFactory.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArrayReversibleScanner.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArrayScanner.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArraySearcher.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeCell.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/column/ColumnNodeReader.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/column/ColumnReader.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/row/RowNodeReader.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/PrefixTreeEncoder.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/column/ColumnNodeWriter.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/column/ColumnSectionWriter.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/other/ColumnNodeType.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/row/RowNodeWriter.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/column/TestColumnBuilder.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowData.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowEncoder.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/data/TestRowDataRandomKeyValuesWithTags.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/data/TestRowDataTrivialWithTags.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/CellProtos.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
          • /hbase/trunk/hbase-protocol/src/main/protobuf/Cell.proto
          • /hbase/trunk/hbase-protocol/src/main/protobuf/Client.proto
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFlusher.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogPrettyPrinter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/KeyValueCompression.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ChecksumType.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TagUsage.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestKeyValueCompression.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALCellCodecWithCompression.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedReader.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriter.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriterBase.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/RestartMetaTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadParallel.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java
          • /hbase/trunk/hbase-server/src/test/resources/mapred-site.xml
          Hudson added a comment - SUCCESS: Integrated in HBase-TRUNK #4544 (See https://builds.apache.org/job/HBase-TRUNK/4544/ ) HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) (ramkrishna: rev 1525269)
          stack added a comment -

          Pity. I wanted to review before it went in. Any chance of this stuff going into a feature branch first before it lands on trunk? My fault I did not review before this, and I don't want to hold up trunk, but it might be good to get more eyes on it; there is opportunity here for opening up the hfile/filescanner APIs. Thanks.

          Hudson added a comment -

          FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #751 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/751/)
          HBASE-8496 - Implement tags and the internals of how a tag should look like (Ram) (ramkrishna: rev 1525269)

          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueTestUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumType.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodec.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngest.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithTags.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestLazyCfLoading.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeCodec.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/DecoderFactory.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArrayReversibleScanner.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArrayScanner.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArraySearcher.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeCell.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/column/ColumnNodeReader.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/column/ColumnReader.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/row/RowNodeReader.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/PrefixTreeEncoder.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/column/ColumnNodeWriter.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/column/ColumnSectionWriter.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/other/ColumnNodeType.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/row/RowNodeWriter.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/column/TestColumnBuilder.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowData.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowEncoder.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/data/TestRowDataRandomKeyValuesWithTags.java
          • /hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/data/TestRowDataTrivialWithTags.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/CellProtos.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
          • /hbase/trunk/hbase-protocol/src/main/protobuf/Cell.proto
          • /hbase/trunk/hbase-protocol/src/main/protobuf/Client.proto
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFlusher.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogPrettyPrinter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/KeyValueCompression.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ChecksumType.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TagUsage.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestKeyValueCompression.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALCellCodecWithCompression.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedReader.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriter.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriterBase.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/RestartMetaTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadParallel.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java
          • /hbase/trunk/hbase-server/src/test/resources/mapred-site.xml
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4549 (See https://builds.apache.org/job/HBase-TRUNK/4549/)
          HBASE-8496-Implement tags and the internals of how a tag should look like - Addendum to remove ChecksumFactory.java (Ram) (ramkrishna: rev 1525504)

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #756 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/756/)
          HBASE-8496-Implement tags and the internals of how a tag should look like - Addendum to remove ChecksumFactory.java (Ram) (ramkrishna: rev 1525504)

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
          stack added a comment -

          Shouldn't this issue have a release note? I marked it as reviewed and an incompatible change (is that right?). What should we add to the refguide on tags? Or, we are not ready to add anything there just yet?

          On the updated design doc:

          There is nothing in the way of altering how tags are currently added, right? It is just done this way for expediency given so much of the core is still up on KV.

          The below should be 0 or more... right?

          Every KV can have 1 or more tags.

          .... hmm... nevermind the rest of the comments. It looks like this design doc. is a long way from what was implemented. np. We just need a bit of a write up on what went in.... before 0.98. Can ignore the below.

          On slide #3, every tag has a type byte preceding it? On slide #3 you don't say what a tag is? Just a run of bytes?

          Oh, so looks like the implementation has deviated from the design, right? OK. Is it written up in short form anywhere? What was implemented?

          The tag structure on slide #5 is different from what is on #3. On #5 it talks of a tagarray (am I being too literal)?

          Are there big changes between hfilev2 and hfilev3? They seem small going by this design doc.

          ramkrishna.s.vasudevan added a comment -

          Shouldn't this issue have a release note? I marked it as reviewed and an incompatible change (is that right?). What should we add to the refguide on tags? Or, we are not ready to add anything there just yet?

          I can add a release note and reference guide on tags.

          The below should be 0 or more... right?

          The intention was to say tags can be 1 or more in the sense that we support more than 1. Ya, per KV it is 0 or more tags.

          It looks like this design doc. is a long way from what was implemented

          The implementation has not deviated much from the design. But we added things like HFileContext just to have a POJO for all the HFile-related attributes. Mainly that avoids the problem that was once pointed out in the older review requests, where in order to include/exclude tags we were passing booleans in method parameters, like how includeMemstoreTS is passed throughout the code flow.
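The context-object idea described above can be sketched roughly as follows. This is a simplified illustration, not the actual HBase HFileContext class (which carries many more attributes); the point is that one POJO replaces a trail of booleans threaded through every method signature.

```java
// Simplified sketch of the HFileContext idea: bundle per-file attributes
// (here just two hypothetical flags) into one object so that encoders and
// readers take a single context parameter instead of repeated booleans.
public class FileContextSketch {
    static final class Context {
        final boolean includesTags;
        final boolean includesMvcc;

        Context(boolean includesTags, boolean includesMvcc) {
            this.includesTags = includesTags;
            this.includesMvcc = includesMvcc;
        }
    }

    // Before: encode(kv, includeTags, includeMvcc, ...) at every call site.
    // After: encode(kv, ctx) — new attributes need no signature changes.
    static String describeEncoding(Context ctx) {
        return (ctx.includesTags ? "with tags" : "no tags")
                + ", " + (ctx.includesMvcc ? "with mvcc" : "no mvcc");
    }

    public static void main(String[] args) {
        Context ctx = new Context(true, false);
        System.out.println(describeEncoding(ctx)); // with tags, no mvcc
    }
}
```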

          On slide #3 you don't say what a tag is? Just a run of bytes?

          I can add them.

          The tag structure on slide #5 is different from what is on #3. On #5 it talks of a tagarray (am I being too literal)?

          Slide #3 describes what the tag looks like per KV. As every KV can have one or more tags, we use a format that gives the length, type and the tag bytes.
          When we write it in the HFile we need to write the total tag length and all the tags that are available. I wouldn't call that a different format; it is just the way it is persisted in the HFile.
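The per-KV tag layout being discussed (<2 byte tag length><1 byte type code><tag bytes>) can be sketched like this. This is an illustration only, not the actual org.apache.hadoop.hbase.Tag code, and it assumes the 2-byte length counts the type byte plus the tag data.

```java
import java.nio.ByteBuffer;

// Sketch of serializing one tag in the wire format from the discussion:
// <2 byte tag length><1 byte type code><tag bytes>.
public class TagFormatSketch {
    static byte[] serialize(byte type, byte[] tagData) {
        ByteBuffer buf = ByteBuffer.allocate(2 + 1 + tagData.length);
        buf.putShort((short) (1 + tagData.length)); // length = type byte + data
        buf.put(type);
        buf.put(tagData);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] s = serialize((byte) 1, "acl".getBytes());
        // 2 (length prefix) + 1 (type) + 3 (data) = 6 bytes
        System.out.println(s.length); // 6
    }
}
```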

          Are there big changes between hfilev2 and hfilev3?

          Agree that the change is not much. But changing the existing HFileV2 was looked upon as a risky area for accommodating tags, and hence we decided to go with V3.

          stack added a comment -

          All is good, ramkrishna.s.vasudevan. Would suggest not spending any more time on the design doc. Release note would be good. Thanks.

          stack added a comment -

          ramkrishna.s.vasudevan Why do we not wholesale move to hfilev3? Why do we have to enable the config? Why not write all as hfilev3 going forward? Thanks.

          Anoop Sam John added a comment -

          Initially we didn't have a way to make tags optional. Even now, if V3 is enabled and writes are without any tags, during flush we will write an extra 2 bytes of tag length (0) with every KV. But later, during compaction, this also will get removed. Given that, it might be better to make the default version 3 so that there is no need to change it explicitly to use tags. I am +1 for that now.
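The overhead being weighed here can be modeled roughly as follows. The field sizes mirror the KeyValue layout (4-byte key length, 4-byte value length), but this is a simplified illustration, not the real serializer: a tag-less KV in the v3 layout still carries a 2-byte tags-length field set to 0.

```java
// Rough model of the flush overhead from the discussion: under v3, a KV
// written without tags pays exactly 2 extra bytes (a zero tags-length
// field) until a compaction rewrites the file.
public class TagOverheadSketch {
    static int v2Size(int keyLen, int valueLen) {
        return 4 + 4 + keyLen + valueLen; // key length + value length + payloads
    }

    static int v3Size(int keyLen, int valueLen, int tagsLen) {
        // v3 appends a 2-byte tags length even when tagsLen == 0
        return v2Size(keyLen, valueLen) + 2 + tagsLen;
    }

    public static void main(String[] args) {
        // A tag-less KV costs exactly 2 extra bytes under v3
        System.out.println(v3Size(24, 100, 0) - v2Size(24, 100)); // 2
    }
}
```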

          ramkrishna.s.vasudevan added a comment -

          If those additional 2 bytes during flush are OK, we can make the default version v3. +1

          Anoop Sam John added a comment -

          Raised HBASE-9801 for this.

          Andrew Purtell added a comment -

          But changing the existing HFileV2 was looked upon as a risky area for accommodating tags, and hence we decided to go with V3.

          I think it makes sense to tag v3 experimental for the 0.98 cycle so it is a safe place for collecting new features, to iron them out.

          Tags may not be the only difference with v3. HBASE-7544 proposes to add encryption.

          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #835 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/835/)
          HBASE-9816-Address review comments in HBASE-8496 (Ram) (ramkrishna: rev 1540785)

          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeCodec.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/other/ColumnNodeType.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4678 (See https://builds.apache.org/job/HBase-TRUNK/4678/)
          HBASE-9816-Address review comments in HBASE-8496 (Ram) (ramkrishna: rev 1540785)

          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeCodec.java
          • /hbase/trunk/hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/other/ColumnNodeType.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java

            People

            • Assignee: ramkrishna.s.vasudevan
            • Reporter: ramkrishna.s.vasudevan