HBase
  HBASE-7902

deletes may be removed during minor compaction, in non-standard compaction schemes [rename enums]

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.98.0, 0.95.2
    • Component/s: Compaction
    • Labels:
      None
    • Release Note:
      committed to 0.95 and trunk last week

      Description

      Deletes are only removed during major compaction now. However, in the presence of file ordering, deletes can be removed during minor compaction too, as long as there's no file that is not being compacted that is older than the files that are.
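
      The condition in the description can be sketched as follows (a toy model with hypothetical names, not HBase's actual API): identifying files by sequence id, a minor compaction may drop delete markers only if no file outside the selection is older than the files being compacted, i.e. the selection includes the store's oldest file.

      ```java
      import java.util.Collections;
      import java.util.List;

      // Sketch of the rule described above (hypothetical names, not HBase code):
      // lower sequence id = older file. Deletes may be dropped only when the
      // compaction selection contains the oldest file in the store.
      public class DropDeletesRule {
          static boolean canDropDeletes(List<Long> allFileSeqIds, List<Long> selectedSeqIds) {
              return Collections.min(selectedSeqIds).equals(Collections.min(allFileSeqIds));
          }

          public static void main(String[] args) {
              List<Long> files = List.of(1L, 2L, 3L, 4L);
              System.out.println(canDropDeletes(files, List.of(1L, 2L))); // true: oldest file included
              System.out.println(canDropDeletes(files, List.of(2L, 3L))); // false: file 1 is older
          }
      }
      ```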

      1. HBASE-7902-v0.patch
        20 kB
        Sergey Shelukhin
      2. HBASE-7902-v0-with-7843.patch
        107 kB
        Sergey Shelukhin
      3. HBASE-7902-v1.patch
        10 kB
        Sergey Shelukhin
      4. HBASE-7902-v1-.patch
        10 kB
        Sergey Shelukhin

        Issue Links

          Activity

          Sergey Shelukhin added a comment -

          The patch is based on HBASE-7843, as the mechanism to propagate the flag will have to be different before and after that; no sense redoing it twice. Attaching the actual patch and the combined patch that Apache will be able to apply.

          The patch changes the ScanType semantics and names for compaction to be more precise, and changes the policy to set it based on whether we compact from the oldest file. Piggybacked on an existing test to verify.
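
          As a sketch of what such a rename decouples (the value names below mirror HBase's post-rename ScanType; the mapping logic is illustrative, not the actual patch): the scanner no longer asks "is this a major compaction?" but "may deletes be dropped?".

          ```java
          // Illustrative sketch only: the enum values follow HBase's renamed
          // ScanType, but forCompaction is a made-up helper for this example.
          public class ScanTypeSketch {
              enum ScanType { USER_SCAN, COMPACT_RETAIN_DELETES, COMPACT_DROP_DELETES }

              // The policy, not "major vs. minor", decides whether deletes may
              // be dropped (e.g. when the selection includes the oldest file).
              static ScanType forCompaction(boolean mayDropDeletes) {
                  return mayDropDeletes ? ScanType.COMPACT_DROP_DELETES
                                        : ScanType.COMPACT_RETAIN_DELETES;
              }

              public static void main(String[] args) {
                  System.out.println(forCompaction(true));  // COMPACT_DROP_DELETES
                  System.out.println(forCompaction(false)); // COMPACT_RETAIN_DELETES
              }
          }
          ```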

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12570550/HBASE-7902-v0-with-7843.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 28 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.constraint.TestConstraint
          org.apache.hadoop.hbase.security.access.TestAccessController

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4503//console

          This message is automatically generated.

          Sergey Shelukhin added a comment -

          TestConstraint fails due to a known issue (I think a JIRA is about to be created); TestAccessController may be due to the same issue, but it passes locally.

          Sergey Shelukhin added a comment -

          TC failed due to HBASE-7933

          Elliott Clark added a comment -

          Unless I'm missing something I'm not sure that holds if the timestamps are put out of order.

          Hfile 1:
          Put(ts = 2)

          Hfile 2:
          Put(ts = 4)
          Delete(ts = 2)
          Delete(ts = 1)

          Hfile 3:
          Put(ts = 1)

          If you now compact file 1 and file 2 and remove the delete, then the put in file 3 will be visible.
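
          This counterexample can be played out in a toy simulation (plain Java, not HBase code; deletes here are version deletes masking a Put with the same timestamp):

          ```java
          import java.util.Set;
          import java.util.TreeSet;
          import java.util.stream.Collectors;

          // Toy model of the scenario above: each number is a Put timestamp,
          // and a delete marker masks the Put with the same timestamp.
          public class OutOfOrderTimestamps {
              static Set<Integer> visible(Set<Integer> puts, Set<Integer> deletes) {
                  // A Put is visible unless a delete marker covers its timestamp.
                  return puts.stream().filter(ts -> !deletes.contains(ts))
                             .collect(Collectors.toCollection(TreeSet::new));
              }

              public static void main(String[] args) {
                  Set<Integer> allPuts = Set.of(2, 4, 1);  // files 1, 2, 3 together
                  Set<Integer> deletes = Set.of(2, 1);     // markers in file 2

                  // Correct state with all files considered: only Put(ts=4) visible.
                  System.out.println(visible(allPuts, deletes));           // [4]

                  // Compact files 1+2 and drop the markers; file 3 was not compacted.
                  Set<Integer> store = new TreeSet<>(visible(Set.of(2, 4), deletes));
                  store.add(1);                            // Put(ts=1) from file 3
                  System.out.println(store);               // [1, 4] -- ts=1 resurrected
              }
          }
          ```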

          Sergey Shelukhin added a comment -

          Hmm, that is true. Well, I will repurpose this issue; it looks like in the default policy this cannot be done without additional metadata, and then only in a yet more limited set of circumstances (i.e., assuming most people don't use explicit timestamps, we could store min-ts/max-ts and get the benefit in many cases).

          Sergey Shelukhin added a comment -

          Hmm. I am starting to wonder about the semantics of this.
          Compactions (including major) trigger at arbitrary times.
          What if we have this Put(ts = 1) in the memstore while we do the major compaction? What if we write it 1 minute after the Delete(ts = 2), and the compaction has already happened?

          Sergey Shelukhin added a comment -

          It seems like this inconsistency is already there. We probably don't want to make it more prevalent by default unless we invent proper semantics that don't depend on timing (or they are already there and one or both of us are incorrect); but this will have a bearing on schemes like stripe and especially level, where there's no major compaction as such, and dropping deletes in the presence of L0 becomes extremely painful (for stripe: compact a stripe + L0 and drop only deletes from the range of the stripe) or near impossible (classical level).

          Lars Hofhansl added a comment -

          The current logic ensures that delete markers are removed if (and only if) they no longer affect any KVs at the time when the major compaction runs.

          I do not think we can optimize this along the timestamp dimension (there are other ways of course, such as your proposed striped compactions, where we partition along the key dimension).

          The scenario you describe is one that confuses many HBase users. If timestamps are dated into the past or the future, one had better know what one is doing.

          Sergey Shelukhin added a comment -

          delete markers are removed if (and only if) they no longer affect any KVs at the time when the major compaction runs.

          What about markers in memstore?

          Lars Hofhansl added a comment -

          (We had comment overlap above... took a bit to type that comment.)

          The memstore can have new or old data (back or forward dated), so it always needs to be considered (if that is what you meant).

          I think I mentioned on the stripe-compaction JIRA... Every compaction that wants to remove delete markers has to consider the memstore and L0. Now it does seem that such a compaction needs to consider all levels.
          In the face of application-defined timestamps that is indeed a bit tricky (and one area where we differ from LevelDB).

          In the striped case each stripe is individually leveled, right? So we still get the benefit that a "major compaction" only has to consider this stripe (plus memstore and all L0) even if that includes all levels.

          Sergey Shelukhin added a comment -

          The current major compaction doesn't consider memstore though, as far as I see.

          Sergey Shelukhin added a comment -

          Your comment with regard to stripe appears to be correct; moreover, if we keep the same level of consistency for this, we will not be able to discard all deletes in these cases, just deletes from a specific range (or, to simplify, from a certain set of files; we can keep L0 deletes).

          Lars Hofhansl added a comment -

          You are right, I can't find it referencing the memstore either. Hmm.

          I wonder how it handles the scenario where a Put with an older TS ts1 is in the memstore and a Delete with a TS ts2 > ts1 is in one of the store files. It seems it would incorrectly discard the delete marker.

          Sergey Shelukhin added a comment -

          This depends on your definition of "incorrectly".
          Given that there's no control over compaction trigger time in most normal scenarios, and strictly speaking no sync between puts and compactions at all, I view put-in-the-past to memstore -> major compaction and major compaction -> put-in-the-past to memstore as purely a timing difference, not a semantic difference. Unless the user wants to manually sync every put with every major compaction.
          This patch would exacerbate the "timing" problem quite a bit, so it probably shouldn't be included in this form (although the scan type rename and separation from major is needed in general; I'll make a patch tomorrow).
          Now, for stripes it /seems/ the workaround (again, excluding memstore) will be relatively easy (there will have to be special handling in the policy, the compactor and maybe the top-level scanner; essentially, different file scanners will have different scan types; but this is from memory, I need to check the code).
          But for level, if we implement it, we never have all files for any key.

          ramkrishna.s.vasudevan added a comment -

          Major compaction deals only with store files.
          So suppose a put/delete happened for a row but they are still in the memstore, and we trigger a major compaction.
          Doing a raw scan will give me both the put and the delete.

          But when the major compaction is done after flushing the current memstore, we don't get that. Faced this once; I thought it was expected behaviour.

          Lars Hofhansl added a comment -

          The problem here would be a Delete in a store file and a Put with an older timestamp in the memstore.
          Now, the Put is marked deleted by the newer Delete. A normal scan will not return the Put, but the major compaction scan only looks at the store files and will happily remove the Delete and leave the Put around.

          Now, backdating Puts is weird (I can always add a backdated Put to the memstore after the compaction finishes), but in this case the compaction does in principle have all the information, and could produce a correct state at the time it runs.
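
          This memstore scenario can be played out in a toy simulation too (plain Java, not HBase code; here a delete marker at ts masks any Put with a timestamp <= ts, as a column delete would):

          ```java
          import java.util.Set;
          import java.util.TreeSet;
          import java.util.stream.Collectors;

          // Toy model: Delete(ts=2) sits in a store file, a backdated Put(ts=1)
          // sits in the memstore. A "major compaction" that scans only store
          // files sees a delete covering nothing it can see, and drops it.
          public class MemstoreBackdatedPut {
              static Set<Integer> visible(Set<Integer> puts, Set<Integer> deleteTs) {
                  return puts.stream()
                             .filter(p -> deleteTs.stream().noneMatch(d -> p <= d))
                             .collect(Collectors.toCollection(TreeSet::new));
              }

              public static void main(String[] args) {
                  Set<Integer> memstorePut = Set.of(1); // backdated Put(ts=1)
                  Set<Integer> delete = Set.of(2);      // Delete(ts=2) on disk

                  // Correct answer, all data considered: the Put is masked.
                  System.out.println(visible(memstorePut, delete));    // []

                  // After a store-file-only compaction drops the marker, the
                  // memstore flushes and the Put survives.
                  System.out.println(visible(memstorePut, Set.of()));  // [1]
              }
          }
          ```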

          Matt Corgan added a comment -

          The problem here would be the Delete in a store file and a Put with an older timestamp in the memstore.

          I wish the BigTable paper didn't suggest this was an appropriate use case. Probably too late to rethink now, but seems contrary to the whole LSM architecture.

          Sergey Shelukhin added a comment -

          Updated patch to only separate the flag from major compaction, not changing the behavior.

          Sergey Shelukhin added a comment -

          but in this case the compaction does have in principle all the information, and produce a correct state at the time it runs.

          Not counting the memstore... so if we similarly don't count a couple of store files, we are just extending the time window, not changing the behavior.
          Although I agree that practically this should stay.

          Elliott Clark added a comment -

          I wish the BigTable paper didn't suggest this was an appropriate use case.

          Me too.

          Jean-Marc Spaggiari added a comment -

          Sorry, I'm replying a bit late, but for this situation:
          Hfile 1:
          Put(ts = 2)

          Hfile 2:
          Put(ts = 4)
          Delete(ts = 2)
          Delete(ts = 1)

          Hfile 3:
          Put(ts = 1)

          When we compact the two HFiles, maybe we can "simply" keep the last Delete (if any) instead of removing it? It would still be a useful compaction, since it would remove a lot, and it would not break the logic for the coming HFile 3.

          Sergey Shelukhin added a comment -

          Jean-Marc Spaggiari yeah, that is the current behavior

          Sergey Shelukhin added a comment -

          Any comments on the latest patch? It essentially just renames the enum

          Ted Yu added a comment -

          Lars Hofhansl:
          What do you think ?

          w.r.t. QA run, the fact that there was no QA report could mean the latest patch needs rebasing.

          Sergey Shelukhin added a comment -

          Attaching the same patch to trigger QA... rebased, but it seems to be a no-op.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12571321/HBASE-7902-v1-.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.constraint.TestConstraint
          org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

          -1 core zombie tests. There are 1 zombie test(s):

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4591//console

          This message is automatically generated.

          Lars Hofhansl added a comment -

          This patch doesn't change any behavior, right?

          Lars Hofhansl added a comment -

          NM. You say so above. Looks good. +1

          Sergey Shelukhin added a comment -

          Resolved, as it was committed to trunk and 0.95.

          Jean-Marc Spaggiari added a comment -

          Can someone please update the "Fix Version/s:" field?
          Thanks.

          stack added a comment -

          I tried to add you as a contributor so you could do this stuff, Jean-Marc Spaggiari, but for some reason you are not showing in the admin screen... let me try again later.

          Jean-Marc Spaggiari added a comment -

          Oh! Like, a promotion? Cool. Keep me posted if it works.


            People

            • Assignee:
              Sergey Shelukhin
              Reporter:
              Sergey Shelukhin
            • Votes:
              0
              Watchers:
              10
