HBase / HBASE-6427

Pluggable compaction and scan policies via coprocessors

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.94.2
    • Component/s: None
    • Labels:
      None

      Description

      When implementing higher-level stores on top of HBase it is necessary to allow dynamic control over how long KVs must be kept around.
      Semi-static config options for ColumnFamilies (# of versions or TTL) are not sufficient.

      This can be done with a few additional coprocessor hooks, or by making Store.ScanInfo pluggable.

      Was:
      The simplest way to achieve this is to have a pluggable class to determine the smallestReadpoint for Region. That way outside code can control what KVs to retain.

      1. 6427-0.94.txt
        58 kB
        Lars Hofhansl
      2. 6427-0.94-addendum.txt
        0.6 kB
        Lars Hofhansl
      3. 6427-notReady.txt
        26 kB
        Lars Hofhansl
      4. 6427-v1.txt
        28 kB
        Lars Hofhansl
      5. 6427-v10.txt
        59 kB
        Lars Hofhansl
      6. 6427-v2.txt
        37 kB
        Lars Hofhansl
      7. 6427-v3.txt
        44 kB
        Lars Hofhansl
      8. 6427-v4.txt
        49 kB
        Lars Hofhansl
      9. 6427-v5.txt
        49 kB
        Lars Hofhansl
      10. 6427-v7.txt
        58 kB
        Lars Hofhansl


          Activity

          lhofhansl Lars Hofhansl added a comment -

          Let me clarify what I mean by this:
          If I wanted to implement an MVCC-based optimistic transaction engine on top of HBase I would naturally want to use HBase's built-in versioning (where possible).
          In that case it is not clear a priori how many versions to keep or for how long (i.e. specifying VERSIONS/TTL is too static). The outside engine would need to determine that.
          The simplest of all approaches would be to do that via the smallestReadpoint in each region, by making its determination pluggable.

          lhofhansl Lars Hofhansl added a comment -

          The other scenario where this is useful is for M/R based incremental backups (as described here: http://hadoop-hbase.blogspot.com/2012/04/timestamp-consistent-backups-in-hbase.html).

          The backup tools can then control exactly what data to keep, while the backups are running.
          The actual plugged policy would probably coordinate via ZK.

          lhofhansl Lars Hofhansl added a comment -

          Upon further scrutiny, that would actually not work, because any external code would have no knowledge about the internal memstoreTSs.

          Perhaps a better option would be to make the expiration policy of a column family pluggable. That way TTL and # versions could be controlled from the outside.
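The retention decision such a pluggable expiration policy would make can be modelled in isolation. The following is a minimal, self-contained sketch (the `RetentionPolicy` interface and all names here are hypothetical stand-ins, not HBase API): it decides whether a cell version survives based on a TTL and a version limit supplied by the plugin rather than by the column-family schema.

```java
// Hypothetical model of a pluggable expiration policy: NOT HBase API.
// A plugin supplies ttlMs and maxVersions dynamically instead of the
// semi-static column-family schema settings.
interface RetentionPolicy {
    long ttlMs();       // dynamic time-to-live
    int maxVersions();  // dynamic version limit
}

public class RetentionSketch {
    // Keep a cell if it is young enough AND within the version budget.
    static boolean keep(RetentionPolicy p, long cellTs, long nowTs, int versionIndex) {
        boolean fresh = (nowTs - cellTs) <= p.ttlMs();
        boolean withinVersions = versionIndex < p.maxVersions();
        return fresh && withinVersions;
    }

    public static void main(String[] args) {
        // A backup tool could widen retention while it runs:
        RetentionPolicy duringBackup = new RetentionPolicy() {
            public long ttlMs() { return Long.MAX_VALUE; }       // keep everything
            public int maxVersions() { return Integer.MAX_VALUE; }
        };
        System.out.println(keep(duringBackup, 0L, 1_000_000L, 5)); // true
    }
}
```

The point of the sketch is only that TTL and version count become method calls answered at flush/compaction time, which is what "controlled from the outside" requires.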

          lhofhansl Lars Hofhansl added a comment -

          Yet another way of looking at this is a new coprocessor hook.
          That would be a hook that sits before the StoreScanner is created (in Store.internalFlushCache and Store.compact) and is passed the set of scanners to use, the store, and whether this is a major compaction (in the compaction case). The hook could optionally return a scanner, and if a non-null scanner is returned it will be used for the flush/compaction.

          Now, there already are preFlush and preCompact hooks (interestingly, preFlush is at the region level, whereas preCompact is at the store level, which is not quite right I think; I wonder whether we can change that), so I'm having a hard time naming these hooks accordingly. "preScannerFlush" and "preScannerCompact" don't quite sound right.

          Andrew Purtell Do you have an opinion?

          lhofhansl Lars Hofhansl added a comment -

          Here's an initial idea for a patch.
          For this to work coprocessors need to have access to KeyValueScanner, StoreScanner, etc., so this violates/leaks some of the previous abstractions.

          This patch would allow a coprocessor to actually produce a scanner for a flush or a compaction, while not requiring it to reimplement all the logic.
          A coprocessor can now in fact just override the TTL and/or # versions for a flush/compaction.

          Please let me know what you think and how to improve this.
          And for the love of god, please think of better names for the two new hooks.

          apurtell Andrew Purtell added a comment -

          Or use polymorphism?

          +  public InternalScanner preFlush(final ObserverContext<RegionCoprocessorEnvironment> c,
          +      Store store, KeyValueScanner scanner) throws IOException;
          +
          +  @Deprecated
             public void preFlush(ObserverContext<RegionCoprocessorEnvironment> e) throws IOException;
          
          +  public InternalScanner preCompact(final ObserverContext<RegionCoprocessorEnvironment> c,
          +      Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs)
          +      throws IOException;
          +
          +  @Deprecated
             public void preCompact(ObserverContext<RegionCoprocessorEnvironment> e
          ...
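The overload-and-deprecate pattern in the snippet above can be shown in a self-contained form (the `ScannerLike`/`ObserverApi` names are hypothetical stand-ins, not the real HBase types): a richer hook with the same name is added alongside the old one, which stays behind `@Deprecated` so existing implementors keep compiling.

```java
// Hypothetical illustration of the overload-and-deprecate API evolution
// pattern suggested above. Names are stand-ins, not HBase API.
interface ScannerLike {}

interface ObserverApi {
    // New, richer hook: receives the input and may return a replacement.
    ScannerLike preFlush(ScannerLike input);

    // Old, parameterless hook kept for source compatibility.
    @Deprecated
    void preFlush();
}

public class OverloadSketch {
    public static void main(String[] args) {
        ObserverApi o = new ObserverApi() {
            public ScannerLike preFlush(ScannerLike input) { return input; } // pass-through
            @Deprecated
            public void preFlush() {}
        };
        ScannerLike s = new ScannerLike() {};
        System.out.println(o.preFlush(s) == s); // true
    }
}
```

Java resolves the two `preFlush` methods by parameter list, so both can coexist on one interface while callers migrate.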
          
          lhofhansl Lars Hofhansl added a comment -

          Uh... I like that.

          lhofhansl Lars Hofhansl added a comment -

          New patch

          lhofhansl Lars Hofhansl added a comment -

          The part I still have to think through is how to handle actual user scans. By default a user scan will also filter by TTL/versions, so it's one thing to prevent the KVs from being compacted away and another to actually make them visible to user scans.
          A similar approach can be followed in preScannerOpen, as long as the coprocessor has enough access to internal region data structures to rebuild the default scanner.

          apurtell Andrew Purtell added a comment -

          BaseRegionObserver should reimplement the default behavior in the new methods? Anybody who inherits would get it.

          apurtell Andrew Purtell added a comment -

          Or, better yet, BaseRegionObserver calls out to a Store static method that does it, with some javadoc to make it clear what's going on?

          lhofhansl Lars Hofhansl added a comment -

          Hmm... The default behavior is followed when RegionObserver.pre{flush|compact} returns a null scanner, which is what BaseRegionObserver does by default.
          BaseRegionObserver implementing the default behavior would not really buy anything (unless I am missing something).

          As for the last comment above, I think we'd need a preStoreScannerOpen, which would be called in Store.getScanner (right before the new StoreScanner is created) to allow the coprocessor to return a custom scanner there too.
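The dispatch described here — ask the coprocessor first, fall back to the default StoreScanner if it returns null — can be sketched with plain types (all names below are hypothetical stand-ins, not the real HBase classes):

```java
import java.util.function.Supplier;

// Hypothetical stand-ins for HBase types, just to show the dispatch flow.
interface KeyValueScanner {}

interface RegionObserverSketch {
    // Return a custom scanner, or null to accept the default.
    KeyValueScanner preStoreScannerOpen();
}

public class StoreSketch {
    // Mirrors the proposed Store.getScanner flow: the hook runs right
    // before the default StoreScanner would be created.
    static KeyValueScanner getScanner(RegionObserverSketch observer,
                                      Supplier<KeyValueScanner> defaultScanner) {
        KeyValueScanner custom = observer.preStoreScannerOpen();
        return (custom != null) ? custom : defaultScanner.get();
    }

    public static void main(String[] args) {
        KeyValueScanner dflt = new KeyValueScanner() {};
        // Observer declines: default scanner is used.
        System.out.println(getScanner(() -> null, () -> dflt) == dflt); // true
        // Observer overrides: its scanner is used instead.
        KeyValueScanner mine = new KeyValueScanner() {};
        System.out.println(getScanner(() -> mine, () -> dflt) == mine); // true
    }
}
```

The Supplier makes the "right before the new StoreScanner is created" timing explicit: the default is only constructed if no coprocessor supplies a replacement.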

          apurtell Andrew Purtell added a comment -

          The default behavior is followed when RegionObserver.pre{flush|compact} returns a null scanner, which is what BaseRegionObserver does by default.

          Fine, I was misled by the unit test code.

          I think we'd need a preStoreScannerOpen, which would be called in Store.getScanner (right before the new StoreScanner is created) to allow the coprocessor to return a custom scanner here too.

          Sounds good to me.

          lhofhansl Lars Hofhansl added a comment -

          Of course this has to be compared to simply making the ScanInfo pluggable in Store.java.

          What I want to achieve here is to allow an external process (backup tool, transaction engine, etc.) to override HBase's default TTL/#versions with very high fidelity (i.e. not via a dynamic schema change, which is too heavyweight/slow).

          The coprocessor approach is nice, because it provides a lot of flexibility for other future use cases and it does not invent a new concept. At the same time it adds complexity.

          apurtell Andrew Purtell added a comment -

          The coprocessor approach is nice, because it provides a lot of flexibility for other future use cases and it does not invent a new concept. At the same time it adds complexity.

          On balance the API change here is nice because it extends something that was too limited to address your use case such that now it works for you, and it also admits the possibility of others.

          lhofhansl Lars Hofhansl added a comment -

          Add preStoreScannerOpen(...) to RegionObserver and related classes.

          Runs all of TestFromClientSide and TestCompaction with such a coprocessor.

          apurtell Andrew Purtell added a comment -

          lgtm, good tests

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12538075/6427-v2.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 24 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 14 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient
          org.apache.hadoop.hbase.client.TestFromClientSide
          org.apache.hadoop.hbase.client.TestAdmin
          org.apache.hadoop.hbase.catalog.TestMetaReaderEditor

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2440//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2440//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2440//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2440//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2440//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2440//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2440//console

          This message is automatically generated.

          lhofhansl Lars Hofhansl added a comment -

          I ran the failing tests locally, and they all pass.
          Will sit on this a bit longer, write a test that tests the actual scenario I am interested in, etc.

          lhofhansl Lars Hofhansl added a comment - edited

          v3... Fixes some things and verifies the new scenarios.
          Still not quite done yet; I just need a place to "park" it.
          preStoreScannerOpen will be called for Gets (which is good), including internal Gets (for append/increment/delete/etc.). I think this is OK in all cases, but I need to make sure.

          Edit: Spelling

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12538127/6427-v3.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 27 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 14 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.replication.TestReplication
          org.apache.hadoop.hbase.master.TestAssignmentManager

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2445//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2445//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2445//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2445//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2445//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2445//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2445//console

          This message is automatically generated.

          lhofhansl Lars Hofhansl added a comment -

          This has all the functionality I want.
          The new TTL test is subject to races if the test env is really slow.

          Will play with EnvironmentEdgeManager to control time for these tests to avoid the races.

          Please review the other parts of the code.
          Is there some better refactoring I can do to StoreScanner in order to make this a bit easier on anybody who wants to override some functionality?

          lhofhansl Lars Hofhansl added a comment -
          • used EnvironmentEdge in TestCoprocessorScanPolicy to avoid races.
          • added ScanType which was missing in v4
          zhihyu@ebaysf.com Ted Yu added a comment -

          Some new files need license header.

          +public class ScanPolicyCoprocessor extends BaseRegionObserver {
          

          This class is an observer. Suggest renaming the class.
          Annotations for audience and stability should be added.

          For RegionCoprocessorHost.preCompact():

          +          scanner = ((RegionObserver) env.getInstance()).preCompact(ctx, store, scanners,
          +              scanType, earliestPutTs);
          

          If there are multiple RegionObservers, it seems only the final returned scanner would be used. Is this intentional?
          A similar observation applies to preStoreScannerOpen() and preFlush().

          lhofhansl Lars Hofhansl added a comment -

          Thanks Ted. You are right on all counts. Need to think about multiple coprocessors a bit more.

          lhofhansl Lars Hofhansl added a comment -

          The current preCompact/preScannerOpen/other hooks handle this by passing the scanner from the previous coprocessor to the next one, so that each coprocessor has all the information it needs.
          I could do something similar here, although it would quickly get inscrutable for an implementer of these hooks; but I cannot think of anything better.
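The chaining semantics described here can be modelled with plain types (the `Scanner`/`Observer` names and `runChain` helper are hypothetical stand-ins, not the real RegionCoprocessorHost code): each observer receives the scanner produced by its predecessor, and a null return leaves the previous result in place.

```java
import java.util.List;

// Hypothetical model of how a coprocessor host could chain hook results:
// each observer sees the scanner from the previous one, so the last
// non-null return wins but no observer loses information.
interface Scanner {}

interface Observer {
    // prev is the scanner from the previous observer (null for the first).
    Scanner preCompact(Scanner prev);
}

public class ChainSketch {
    static Scanner runChain(List<Observer> observers) {
        Scanner current = null;
        for (Observer o : observers) {
            Scanner next = o.preCompact(current);
            if (next != null) {
                current = next; // a null return keeps the previous result
            }
        }
        return current;
    }

    public static void main(String[] args) {
        Scanner a = new Scanner() {};
        Scanner b = new Scanner() {};
        // First observer installs a; second replaces/wraps it with b.
        Scanner result = runChain(List.of(prev -> a, prev -> prev == a ? b : prev));
        System.out.println(result == b); // true
    }
}
```

This also answers the multiple-observer question above: the final scanner is returned, but only after every observer in the chain has had a chance to wrap its predecessor.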

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12538229/6427-v5.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 33 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 5 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
          org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
          org.apache.hadoop.hbase.master.TestAssignmentManager

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2450//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2450//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2450//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2450//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2450//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2450//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2450//console

          This message is automatically generated.

          zhihyu@ebaysf.com Ted Yu added a comment -
          +          scanner = ((RegionObserver) env.getInstance()).preCompact(ctx, store, scanners,
          +              scanType, earliestPutTs);
          

          Maybe insert the returned scanner (if not null) into scanners ?

          +   * @deprecated use {@link #preCompact(ObserverContext, Store, List, ScanType, long)} instead
              */
             InternalScanner preCompact(final ObserverContext<RegionCoprocessorEnvironment> c,
                 final Store store, final InternalScanner scanner) throws IOException;
          

          Do we have to deprecate the existing API ? I feel the new API is much more involved in terms of technical internals. Maybe a poll on user mailing list would help clarify.

          +   * @param scanners the list {@link StoreFileScanner}s to be read from
          ...
          +  InternalScanner preCompact(final ObserverContext<RegionCoprocessorEnvironment> c,
          +      final Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs)
          

          nit: the second line above is over 100 characters.
          If scanners really should be StoreFileScanner's, method signature should match expectation.
          See the following in RegionCoprocessorHost.java:

          +   * See {@link RegionObserver#preCompact(ObserverContext, Store, InternalScanner)}
          +   */
          +  public InternalScanner preCompact(Store store, List<StoreFileScanner> scanners,
          +      ScanType scanType, long earliestPutTs) throws IOException {
          

          Review board would facilitate more detailed review.

          zhihyu@ebaysf.com Ted Yu added a comment -

          Looking at the changes to Compactor.compact(), the new preCompact() takes precedence over existing preCompact() method.
          This should be documented.

          My comment above about scanner insertion was not valid.
          It is not clear why more than one RegionObserver would return InternalScanner from preCompact(). For simplicity, we can break out of the loop in RegionCoprocessorHost.preCompact() when a non-null InternalScanner is returned.
          This behavior should also be documented.

          lhofhansl Lars Hofhansl added a comment -

I think we should do what the current preScannerOpen is doing. Each RegionObserver is passed the previous InternalScanner (it would be a KeyValueScanner here, but the same principle applies). A RegionObserver can break the loop via the passed Context; I do not think we should make that the default behavior. I'll have a patch for that soon.

As for the deprecation, I think I agree with Andy here. Having the two hooks is confusing. Everything that could be done with the old hook can also be done with the new one, and coprocessors are meant for extending HBase.

Re: StoreFileScanner vs. ? extends KeyValueScanner... I typically prefer to express this in terms of interfaces rather than concrete classes. It also keeps preCompact and preFlush similar (one gets a list of StoreFileScanners, the other gets a StoreScanner). I do not feel strongly about this, though.

          I'll change the patch and put it up on RB... Thanks for the detailed review Ted!

          apurtell Andrew Purtell added a comment -

          Agree that scanners should be passed along. The use case is subsequent observers wrapping scanners created by those earlier in the chain.

          Also we should deprecate and remove the older more limited hooks now that we have a superset interface that admits more possibilities.

          lhofhansl Lars Hofhansl added a comment -

          Interestingly I would like to have the ability to bypass the default action from both preFlush and preCompact, for example to control how the store files are written.
          With the new hooks there is no way to indicate that (null means "create the default scanner", non-null means use the returned scanner, but still follow the default action).

The hook just prior to creating the scanner could create a new scanner (and hence decide how to filter the inputs); the hook right after scanner creation could then control how/where to write the store files.

          So maybe have a preCompactScannerOpen, and preFlushScannerOpen (similar to my initial idea), and not deprecating the existing hooks?

          apurtell Andrew Purtell added a comment -

          Then you want to restore this behavior:

          // NULL scanner returned from coprocessor hooks means skip normal processing
          

          Can that work?

          I'd not be opposed to adding additional hooks but that should be after exhausting other options here, IMHO, since they would be "close" to each other.

          We could pass in the default StoreScanner. The hook could just return it if wanting default behavior. Might need to make StoreScanner lazy, move initialization out of the constructor. I'm remote and just have your patch to go by at the moment.

          zhihyu@ebaysf.com Ted Yu added a comment -

          null means "create the default scanner"

          Maybe create a special class (called NullScanner ?) implementing InternalScanner. The class provides a singleton which can be returned by preCompact() to indicate skipping normal processing.
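A sentinel like that might look as follows. This is only a sketch of the suggestion: the class name NullScanner is hypothetical, and a simplified stand-in for the real org.apache.hadoop.hbase.regionserver.InternalScanner interface is declared inline so the example compiles on its own.

```java
import java.util.List;

public class NullScannerDemo {
    // Simplified stand-in for org.apache.hadoop.hbase.regionserver.InternalScanner.
    interface InternalScanner {
        boolean next(List<String> results);
        void close();
    }

    // Hypothetical sentinel: a do-nothing singleton that a preCompact() hook
    // could return to signal "skip normal compaction processing".
    static final class NullScanner implements InternalScanner {
        static final NullScanner INSTANCE = new NullScanner();
        private NullScanner() {}
        @Override public boolean next(List<String> results) { return false; } // yields no rows
        @Override public void close() {}
    }

    public static void main(String[] args) {
        InternalScanner s = NullScanner.INSTANCE;
        // The coprocessor host would compare by identity to detect the sentinel.
        System.out.println(s == NullScanner.INSTANCE); // prints true
    }
}
```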

          lhofhansl Lars Hofhansl added a comment -

          After reviewing all the use cases I have this seems to be best proposal:

          • add these
            void postFlush(final ObserverContext<RegionCoprocessorEnvironment> c, final Store store, final StoreFile resultFile) throws IOException;
            InternalScanner preFlush(final ObserverContext<RegionCoprocessorEnvironment> c, final Store store, final InternalScanner scanner) throws IOException;
            InternalScanner preFlushScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> c, final Store store, final KeyValueScanner memstoreScanner, final InternalScanner s) throws IOException;
            InternalScanner preCompactScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> c, final Store store, List<? extends KeyValueScanner> scanners, final ScanType scanType, final long earliestPutTs, final InternalScanner s) throws IOException;
            KeyValueScanner preStoreScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> c, final Store store, final Scan scan, final NavigableSet<byte[]> targetCols, final KeyValueScanner s) throws IOException;
            
          • deprecate these:
            void postFlush(final ObserverContext<RegionCoprocessorEnvironment> c) throws IOException;
            void preFlush(final ObserverContext<RegionCoprocessorEnvironment> c) throws IOException;
            

The new {pre|post}Flush are called per Store, in analogy to {pre|post}Compact.
pre{Flush|Compact}ScannerOpen are called before the flush/compaction scanner is built.

This gives maximum flexibility (I can control the reading scanners and how the store files are written for both flushes and compactions), makes more sense of {pre|post}Flush, and leaves the existing functionality in place.

This is the first proposal that really "feels" right to me. I'll have a patch for that soon.
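The host-side chaining this proposal implies can be sketched as follows. The interfaces here are heavily simplified stand-ins (the real hook also receives the ObserverContext, Store, ScanType, and earliestPutTs), and runChain is a hypothetical helper, not HBase code.

```java
import java.util.ArrayList;
import java.util.List;

public class HookChainDemo {
    // Simplified stand-ins for InternalScanner and the preCompactScannerOpen hook.
    interface InternalScanner { String describe(); }
    interface Observer { InternalScanner preCompactScannerOpen(InternalScanner previous); }

    static String runChain() {
        List<Observer> chain = new ArrayList<>();
        // First observer creates the compaction scanner from scratch (previous is null).
        chain.add(prev -> () -> "base");
        // Second observer wraps whatever the previous observer produced.
        chain.add(prev -> () -> "filter(" + (prev == null ? "default" : prev.describe()) + ")");

        // Analogous to the loop in RegionCoprocessorHost: each observer is handed
        // the scanner returned by the observer before it.
        InternalScanner s = null;
        for (Observer o : chain) {
            s = o.preCompactScannerOpen(s);
        }
        return s == null ? "default" : s.describe();
    }

    public static void main(String[] args) {
        System.out.println(runChain()); // prints filter(base)
    }
}
```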

          lhofhansl Lars Hofhansl added a comment -

          Here's a patch for that.
          I'll also put that up on RB.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12538259/6427-v7.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 33 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 5 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.master.TestMasterNoCluster

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2451//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2451//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2451//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2451//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2451//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2451//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2451//console

          This message is automatically generated.

          lhofhansl Lars Hofhansl added a comment -

          I'd not be opposed to adding additional hooks but that should be after exhausting other options here, IMHO, since they would be "close" to each other.

Do you think the latest proposal is too heavy handed? pre{Flush|Compact}ScannerOpen would be quite close to pre{Flush|Compact}. My reasoning was that an implementer could still override the relatively simple pre{Flush|Compact} hooks.
If these are too many, we can still have only the fewer hooks, and then we'd need some "NullScanner" approach I think.

          apurtell Andrew Purtell added a comment -

          It's fine. I like how you made the new APIs about opening (internal) scanners for various things.

          zhihyu@ebaysf.com Ted Yu added a comment -

          Review board doesn't show correct formatting. So I put the following here.

          to make a new Interface extending both InternalScanner and KeyValueScanner

          I like the above approach.
          Currently we have:

          $ find src/main -name '*.java' -exec grep 'ments.*InternalScanner' {} \; -print
           * also implements InternalScanner.  WARNING: As is, if you try to use this
              implements KeyValueScanner, InternalScanner {
          src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueHeap.java
              implements KeyValueScanner, InternalScanner, ChangedReadersObserver {
          src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
          

          Combining InternalScanner and KeyValueScanner seems natural.
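Such a combined interface could be a one-line declaration. The sketch below illustrates the idea only: CombinedScanner is a hypothetical name, and both parent interfaces are simplified inline stand-ins for the real HBase ones so the example is self-contained.

```java
import java.util.Arrays;
import java.util.List;

public class CombinedScannerDemo {
    // Simplified stand-ins for the two real HBase scanner interfaces.
    interface InternalScanner { boolean next(List<String> results); }
    interface KeyValueScanner { String peek(); boolean seek(String key); }

    // Hypothetical combined interface, as suggested above: an implementation can
    // serve both as compaction/flush input and as a seekable heap scanner.
    interface CombinedScanner extends InternalScanner, KeyValueScanner {}

    // Minimal implementation over a sorted list of keys.
    static class ListScanner implements CombinedScanner {
        private final List<String> keys;
        private int pos = 0;
        ListScanner(List<String> keys) { this.keys = keys; }
        public String peek() { return pos < keys.size() ? keys.get(pos) : null; }
        public boolean seek(String key) {
            // Advance to the first key >= the requested one.
            while (pos < keys.size() && keys.get(pos).compareTo(key) < 0) pos++;
            return pos < keys.size();
        }
        public boolean next(List<String> results) {
            if (pos >= keys.size()) return false;
            results.add(keys.get(pos++));
            return pos < keys.size(); // true while more rows remain
        }
    }

    public static void main(String[] args) {
        ListScanner s = new ListScanner(Arrays.asList("a", "b", "c"));
        s.seek("b");
        System.out.println(s.peek()); // prints b
    }
}
```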

          lhofhansl Lars Hofhansl added a comment -

Thinking on this more... I think the pre{Flush|Compact}ScannerOpen hooks should just continue to return InternalScanner. Only that interface is needed by downstream code, and we should not extend it only so that coprocessor chaining becomes simpler.
As it stands these hooks can use the passed InternalScanner, but that needs some understanding of HBase... which is needed anyway to correctly deal with chained scanners for flushes or compactions.

I.e. I propose leaving it with what the current patch provides. Not opposed to filing a separate ticket to bring more sense into the various scanner interfaces we have.

          zhihyu@ebaysf.com Ted Yu added a comment -

          to correctly deal with chained scanners

          I personally haven't seen chained scanners in action.

          If we don't think through how chained scanners work, I wouldn't expect HBase users to use this mechanism.

          lhofhansl Lars Hofhansl added a comment -

It's used rarely, because coprocessors are not like stored procedures, but a method to extend HBase.

We have thought it through (IMHO), and the coprocessors now can be chained, and due to the nature of these hooks that will be tricky.
For example, even if we managed to pass in a KeyValueScanner instance, folks would be tempted to just add it to the List of scanners down the chain (as was your first thought above), which is tempting but will not be the right thing to do; the downstream observer must do some complicated logic to merge with the previous scanner in ways that we cannot anticipate.

          apurtell Andrew Purtell added a comment -

          I personally haven't seen chained scanners in action. If we don't think through how chained scanners work, I wouldn't expect HBase users to use this mechanism.

          Ted, this is a great comment, because I think it illustrates an incorrect approach to CP API design. (Not meant to be a criticism of you. ) CPs are targeted as much for HBase developers as they might be for users. The fundamental goal of CPs is to avoid needing to patch core HBase code for implementing new features or researching design alternatives.

          apurtell Andrew Purtell added a comment -

          Lars Hofhansl A follow up JIRA to discuss refactoring the internal scanner interfaces would be reasonable. We do want for extensions to be able to wrap and merge scanners along CP chains.

          apurtell Andrew Purtell added a comment -

          And a follow up note to my above comment: It is a goal to provide adequate extension surface (and I include here besides CPs also other pluggable interfaces where performance considerations are paramount) so new features or design alternatives require no core code patches. IMHO, the design strategy for this goal should be incremental, driven by actual use cases, but with foresight added. This issue is a good example of that I think, Lars has really improved flush and compaction hooks, and this work hasn't been done in the abstract.

          zhihyu@ebaysf.com Ted Yu added a comment -

          I understand the goal for this JIRA and am in support of it.

          the downstream must do some complicated logic to merge with the previous scanner in ways that we cannot anticipate.

          This illustrates the intricacies of scanner chaining. If a scanner is designed for some specific purpose, I wouldn't expect it to function correctly when an arbitrary number of scanners are chained (both) upstream and downstream.
In fact, chaining introduces an unnecessary burden on individual scanners.

          apurtell Andrew Purtell added a comment -

          I understand the goal for this JIRA and am in support of it. [...] This illustrates the intricacies of scanner chaining. If a scanner is designed for some specific purpose, I wouldn't expect it to function correctly when an arbitrary number of scanners are chained (both) upstream and downstream.

          Pardon Ted but I think this is an X-Y argument. I'm not sure we are discussing the chaining of arbitrary scanners (unless I have this wrong.) Each CP hook is dealing with a constrained set of scanners. Flushes will be dealing with memstore scanners. Compactions will be dealing with store scanners. The question has been what kind of interface should input parameters and return types share, there's some design give-and-take there. But we are not talking about, for example, combining memstore scanners with store scanners. (At least, I am not.)

          zhihyu@ebaysf.com Ted Yu added a comment -

          There is no intersection between memstore scanners and store scanners.

          The arbitrary number of scanners is due to the following construct:

          +    for (RegionEnvironment env: coprocessors) {
          +      if (env.getInstance() instanceof RegionObserver) {
          +        ctx = ObserverContext.createAndPrepare(env, ctx);
          +        try {
          +          s = ((RegionObserver) env.getInstance()).preCompactScannerOpen(ctx, store, scanners,
          +              scanType, earliestPutTs, s);
          

          I think we should reduce the complexity (due to scanner chaining) for preCompactScannerOpen().
          If we don't provide a working model for scanner chaining, there is no need to introduce construct for chaining scanners.

          lhofhansl Lars Hofhansl added a comment -

          This illustrates the intricacies of scanner chaining.

          I think that is a false transitive statement.
          For this particular hook chaining is hard, because this is an intricate part of the HBase code. That does not mean that chaining is generally hard (it is not).

          I wouldn't expect it to function correctly when an arbitrary number of scanners are chained (both) upstream and downstream.

          Same here... The point is: It is possible to chain region observers even in this case.
          If that is not desired an implementer can break the chain (via the passed context. and by ignoring the InternalScanner argument).
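Breaking the chain, as described here, can be sketched in the same toy style (plain Java; the Context class only models the idea of ObserverContext's completion flag, it is not the real API): an observer ignores the scanner passed in, supplies its own, and flags the context so the host skips the remaining observers.

```java
import java.util.List;

public class BypassDemo {
    interface Scanner { String describe(); }

    // Minimal stand-in for ObserverContext: an observer can ask the host
    // to stop invoking the rest of the chain.
    static class Context {
        boolean done = false;
        void complete() { done = true; }
    }

    interface Observer { Scanner preCompactScannerOpen(Context ctx, Scanner soFar); }

    // Host loop: stops early once an observer marks the context complete.
    static Scanner chain(List<Observer> observers, Scanner initial) {
        Scanner s = initial;
        Context ctx = new Context();
        for (Observer o : observers) {
            s = o.preCompactScannerOpen(ctx, s);
            if (ctx.done) break;
        }
        return s;
    }

    public static void main(String[] args) {
        // First observer ignores the incoming scanner and ends the chain.
        Observer breaker = (ctx, prev) -> { ctx.complete(); return () -> "mine"; };
        // Second observer would wrap, but is never invoked.
        Observer wrapper = (ctx, prev) -> () -> "wrapped(" + prev.describe() + ")";
        Scanner result = chain(List.of(breaker, wrapper), () -> "store");
        if (!result.describe().equals("mine")) throw new AssertionError(result.describe());
        System.out.println(result.describe());
    }
}
```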

          lhofhansl Lars Hofhansl added a comment -

          Also another thought: I wonder how hard it is to fold custom timestamps into this. An example of what I mean is time dimension that uses monotonically increasing transaction numbers.
          A slight API change to the StoreScanner constructor would make it possible to handle that too: By passing in the absolute "time" at which a KV expires rather than the TTL (which it then internally translates to an absolute time anyway, and which in turn depends on the RegionServer's understanding of time, which is not configurable).

          lhofhansl Lars Hofhansl added a comment -

          Re: Ted's last comment. So you want to enforce that only a single coprocessor can implement these hooks? I would disagree with that.
          The chaining is fine (if complicated), no need to mandate what an implementer can and cannot do (see my previous comment).

          zhihyu@ebaysf.com Ted Yu added a comment -

          If that is not desired an implementer can break the chain

          If implementer X breaks the chain, how would implementer Z know that his/her implementation is not broken by someone else ?
          There is no deterministic order in which X and Z's observers are registered.

          lhofhansl Lars Hofhansl added a comment -

          Ted, you are trying to redesign the overall coprocessor API.
          I would prefer that we can keep the API as is, and then work on ways to make better sense of the scanners as part of a different jira.

          As Andy said a major raison d'etre for coprocessors is to extend HBase. Nobody would just run a 3rd party coprocessor (unless that party is highly trusted). A coprocessor could call System.exit() and shut down the RegionServer... And that was a design choice.

          zhihyu@ebaysf.com Ted Yu added a comment -

          For preCompactScannerOpen() / preFlushScannerOpen(), they're new APIs and serve particular use case if implemented.
          I feel there is no need to pass InternalScanner as parameter.

          Just my personal observation.

          lhofhansl Lars Hofhansl added a comment -

          I see your point.

          But then you

1. need a mechanism to only allow a single RegionObserver to implement these hooks, and
2. place an arbitrary limit on the API (just because it is hard to chain the hooks here does not mean it is impossible or not useful; as I said, one can always create a new implementing class of InternalScanner that does the right thing).

          This follows the coprocessor API pattern. I think this should be kept.

          lhofhansl Lars Hofhansl added a comment -

          Latest patch from RB.

          Does this need more discussion, or is this good to go?
          Ted, Andy?

          apurtell Andrew Purtell added a comment -

          There is no deterministic order in which X and Z's observers are registered.

          Ted, every coprocessor is registered in order of priority by design. There is always a deterministic order of observers. Frankly, this is something basic about CPs you should already understand if providing design comment on CP API.

          apurtell Andrew Purtell added a comment -

          @Lars, lgtm.

          zhihyu@ebaysf.com Ted Yu added a comment -

          From https://blogs.apache.org/hbase/entry/coprocessor_introduction :

          We have not really discussed priority, but it should be reasonably clear how the priority given to a coprocessor affects how it integrates with other coprocessors. When calling out to registered observers, the framework executes their callbacks methods in the sorted order of their priority. Ties are broken arbitrarily.

          This means there still might be scenarios where coprocessors for the same table have the same priority.

          zhihyu@ebaysf.com Ted Yu added a comment -

          For RegionObserver.preFlushScannerOpen(), compaction is mentioned in its javadoc several times.

          apurtell Andrew Purtell added a comment -

          This means there still might be scenarios where coprocessors for the same table have the same priority.

          Fine, that text needs update. The tie is not broken arbitrarily, it is by load order.

          But you are missing the larger point that both Lars and I have mentioned above, CPs are not (currently, nor likely) going to be random user modules loaded blindly with respect to each other. They are deeply embedded in HBase implementation. If as a system integrator you are deploying coprocessors, you will be engineering their load/initialization order as well as all other cluster details. Again, this is an X-Y discussion. It would be best to stick to the issues in scope to this JIRA. If there are larger design issues you'd like to consider, let's open a JIRA for those.
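The ordering rule Andrew describes (ascending priority value, ties broken by load order) can be modeled in a few lines of plain Java (Env and its fields are illustrative stand-ins, not HBase's RegionEnvironment):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PriorityDemo {
    // Minimal stand-in for a loaded coprocessor: a priority value plus the
    // sequence number in which it was loaded.
    static class Env {
        final String name; final int priority; final int seq;
        Env(String name, int priority, int seq) {
            this.name = name; this.priority = priority; this.seq = seq;
        }
    }

    // Observers execute in ascending priority value; ties fall back to
    // load order, so the result is always deterministic.
    static List<String> executionOrder(List<Env> envs) {
        List<Env> sorted = new ArrayList<>(envs);
        sorted.sort(Comparator.comparingInt((Env e) -> e.priority)
                              .thenComparingInt(e -> e.seq));
        List<String> names = new ArrayList<>();
        for (Env e : sorted) names.add(e.name);
        return names;
    }

    public static void main(String[] args) {
        List<Env> envs = List.of(new Env("Z", 10, 0), new Env("X", 10, 1), new Env("A", 1, 2));
        // A has the lowest priority value so runs first; Z and X tie on
        // priority and keep their load order.
        if (!executionOrder(envs).equals(List.of("A", "Z", "X"))) throw new AssertionError();
        System.out.println(executionOrder(envs));
    }
}
```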

          zhihyu@ebaysf.com Ted Yu added a comment -

          I don't have further design level comments.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12538381/6427-v10.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 33 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 6 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2457//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2457//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2457//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2457//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2457//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2457//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2457//console

          This message is automatically generated.

          lhofhansl Lars Hofhansl added a comment -

          Thanks Ted and Andy, and thanks for the discussion.
I think we have two (maybe three) take-aways: (1) We need to look at the various scanner interfaces and see why we have so many diverging interfaces, (2) add more coprocessor documentation (maybe with some more examples), and potentially (3) think generally about what it means to extend HBase and when coprocessors are a good mechanism for that.

It seems to me that coprocessors are a good solution to affect existing processing at certain (including critical) spots, but maybe not suited to replace the entire logic. In that sense this issue represents a corner case - which is probably what spawned the longer discussion here (the creation of the StoreScanner is replaced, and this is done in the context of a bigger operation).
In the future we might be able to use Guice to replace entire subsystems.

          For this issue I'll fix up the Javadoc issues that Ted mentions and commit this to trunk and also make a 0.94 patch.

          Thanks again for the review and the discussion.

          lhofhansl Lars Hofhansl added a comment -

          Committed to trunk.
Do we need an extra discussion about deprecating the old {pre|post}Flush hooks in a 0.94 point release?
In theory we could deprecate them in 0.94 and then remove them in 0.96, but I do not feel strongly about this.

          apurtell Andrew Purtell added a comment -

          We don't want to just yank out CP APIs, but a liberal attitude toward deprecating and removing them is fine IMO. The contract is different than with a public API. Furthermore, the interfaces haven't gelled yet. We're just starting to get interesting use cases.

          lhofhansl Lars Hofhansl added a comment -

          Fair enough.
          Here's a patch for 0.94. Needed only slight massaging.
Has the old {pre|post}Flush deprecated.

          hudson Hudson added a comment -

          Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #116 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/116/)
          HBASE-6427 Pluggable compaction and scan policies via coprocessors (Revision 1367361)

          Result = FAILURE
          larsh :
          Files :

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Compactor.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/NoOpScanPolicyObserver.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
          hudson Hudson added a comment -

          Integrated in HBase-TRUNK #3185 (See https://builds.apache.org/job/HBase-TRUNK/3185/)
          HBASE-6427 Pluggable compaction and scan policies via coprocessors (Revision 1367361)

          Result = SUCCESS
          larsh :
          Files :

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Compactor.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/NoOpScanPolicyObserver.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
          lhofhansl Lars Hofhansl added a comment -

          Committed to 0.94 as well.

          hudson Hudson added a comment -

          Integrated in HBase-0.94 #378 (See https://builds.apache.org/job/HBase-0.94/378/)
          HBASE-6427 Pluggable compaction and scan policies via coprocessors (Revision 1367402)

          Result = FAILURE
          larsh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/NoOpScanPolicyObserver.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
          eclark Elliott Clark added a comment -

          An InterfaceAudience annotation slipped in here. It breaks older Hadoop versions (HBASE-6141).

          lhofhansl Lars Hofhansl added a comment -

          Oops... Yes. Here's an addendum.

          lhofhansl Lars Hofhansl added a comment -

          Committed addendum, thanks for watching, Elliott.

          eclark Elliott Clark added a comment -

          Thanks so much.

          hudson Hudson added a comment -

          Integrated in HBase-0.94 #379 (See https://builds.apache.org/job/HBase-0.94/379/)
          HBASE-6427 addendum (Revision 1367770)

          Result = FAILURE
          larsh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java
          lhofhansl Lars Hofhansl added a comment -

          I ran TestReplication locally and it passed fine. Sigh.

          hudson Hudson added a comment -

          Integrated in HBase-0.94-security-on-Hadoop-23 #6 (See https://builds.apache.org/job/HBase-0.94-security-on-Hadoop-23/6/)
          HBASE-6427 addendum (Revision 1367770)
          HBASE-6427 Pluggable compaction and scan policies via coprocessors (Revision 1367402)

          Result = FAILURE
          larsh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java

          larsh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/NoOpScanPolicyObserver.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
          hudson Hudson added a comment -

          Integrated in HBase-0.94-security #47 (See https://builds.apache.org/job/HBase-0.94-security/47/)
          HBASE-6427 addendum (Revision 1367770)
          HBASE-6427 Pluggable compaction and scan policies via coprocessors (Revision 1367402)

          Result = SUCCESS
          larsh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java

          larsh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanType.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/NoOpScanPolicyObserver.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
          yuzhihong@gmail.com Ted Yu added a comment -

          May I ask why BaseRegionObserver, a class marked with @InterfaceAudience.Public, exposes KeyValueScanner and InternalScanner which are marked @InterfaceAudience.Private ?

          Thanks

          lhofhansl Lars Hofhansl added a comment -

          Not sure. BaseRegionObserver is public, whereas KeyValueScanner and InternalScanner are not. The fact that they are in the same class file does not matter for the annotation. Right?

          yuzhihong@gmail.com Ted Yu added a comment -

          In the new APIs, some methods return null as the InternalScanner:

            public InternalScanner preFlushScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> c,
                final Store store, final KeyValueScanner memstoreScanner, final InternalScanner s)
                throws IOException {
              return null;
            }

          while some return the passed-in scanner:

            public InternalScanner preFlush(ObserverContext<RegionCoprocessorEnvironment> e, Store store,
                InternalScanner scanner) throws IOException {
              return scanner;
            }

          I wonder why the difference.

          This feature isn't marked as an incompatible change.
          If a 0.94.1 user has a jar with a custom BaseRegionObserver implementation, I wonder whether he/she needs to recompile the code to generate a new jar.
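          The asymmetry follows from the two hook contracts: in the *ScannerOpen hooks, returning null tells the region server to build its default scanner, while the plain pre-hooks must hand back a scanner (possibly a wrapped one). A minimal self-contained sketch of that convention (plain Java, hypothetical names, not HBase classes):

```java
// Minimal sketch of the two RegionObserver hook conventions discussed above.
// All names here are hypothetical; this is plain Java, not HBase code.
interface Scanner {
    String name();
}

class HookDemo {
    // Stand-in for the scanner the region server would build itself.
    static Scanner defaultScanner() {
        return new Scanner() {
            public String name() { return "default"; }
        };
    }

    // Convention used by the *ScannerOpen-style hooks: returning null means
    // "no override -- let the host create the default scanner".
    static Scanner preScannerOpenHook() {
        return null;
    }

    // Convention used by the plain pre-hooks: the hook receives a scanner
    // and must return one; the base implementation passes it through.
    static Scanner preHook(Scanner s) {
        return s;
    }

    // Host-side logic: fall back to the default when the hook declines.
    static Scanner openScanner() {
        Scanner fromHook = preScannerOpenHook();
        return (fromHook != null) ? fromHook : defaultScanner();
    }
}
```

          Under this reading, the base-class defaults differ because a null from a *ScannerOpen hook is a signal, whereas a null from preFlush would discard the scanner the host already built.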

          lhofhansl Lars Hofhansl added a comment -

          We said before in various discussions that the coprocessor API is not necessarily stable yet.
          If you think this discussion warrants it, could you file a separate JIRA, please?

          stack stack added a comment -

          Fix up after bulk move overwrote some 0.94.2 fix versions w/ 0.95.0 (Noticed by Lars Hofhansl)


            People

            • Assignee:
              lhofhansl Lars Hofhansl
              Reporter:
              lhofhansl Lars Hofhansl
            • Votes:
              0
            • Watchers:
              15
