HBASE-9343

Implement stateless scanner for Stargate

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.94.11
    • Fix Version/s: 0.98.0, 0.99.0
    • Component/s: REST
    • Labels: None
    • Hadoop Flags: Reviewed
    • Tags: scanner, rest

      Description

      The current scanner implementation for REST (Stargate) stores state on the server and hence is not well suited to REST server failure scenarios. This JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class "ScanResource" has been added and all of the scan parameters are specified as query parameters.

      The following are the scan parameters:

      startrow - The start row for the scan.

      endrow - The end row for the scan.

      columns - The columns to scan.

      starttime, endtime - To retrieve only columns within a specific range of version timestamps, both start and end time must be specified.

      maxversions - To limit the number of versions of each column to be returned.

      batchsize - To limit the maximum number of values returned for each call to next().

      limit - The number of rows to return in the scan operation.

      More on the start row, end row and limit parameters (see the example request below):
      1. If start row, end row and limit are not specified, then the whole table will be scanned.
      2. If start row and limit (say N) are specified, then the scan operation will return N rows starting from the specified start row.
      3. If only the limit parameter (say N) is specified, then the scan operation will return N rows from the start of the table.
      4. If limit and end row are specified, then the scan operation will return N rows from the start of the table up to the end row. If the end row is reached before N rows (say M, where M < N), then M rows will be returned to the user.
      5. If start row, end row and limit (say N) are specified and N is less than the number of rows between start row and end row, then N rows starting from the start row will be returned to the user. If N is greater than the number of rows between start row and end row (say M), then M rows will be returned to the user.
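
      For example, a minimal sketch of a stateless scan request (not taken from the patch; the host, the table name "mytable", and the exact URI under which the scan is exposed are assumptions, and the URI is still being discussed in the comments below):

      # a single GET carrying the scan specification; no scanner state is kept on the REST server
      $ curl -H "Accept: application/json" \
          "http://localhost:8080/mytable?startrow=row-000&endrow=row-100&limit=10"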

      1. HBASE-9343_trunk.05.patch
        52 kB
        Vandana Ayyalasomayajula
      2. HBASE-9343_trunk.04.patch
        50 kB
        Vandana Ayyalasomayajula
      3. HBASE-9343_trunk.03.patch
        50 kB
        Vandana Ayyalasomayajula
      4. HBASE-9343_trunk.02.patch
        49 kB
        Vandana Ayyalasomayajula
      5. HBASE-9343_trunk.01.patch
        48 kB
        Nick Dimiduk
      6. HBASE-9343_trunk.01.patch
        48 kB
        Vandana Ayyalasomayajula
      7. HBASE-9343_trunk.00.patch
        46 kB
        Vandana Ayyalasomayajula
      8. HBASE-9343_94.01.patch
        53 kB
        Vandana Ayyalasomayajula
      9. HBASE-9343_94.00.patch
        53 kB
        Vandana Ayyalasomayajula

          Activity

          Vandana Ayyalasomayajula added a comment -

          First draft of the patch.

          Vandana Ayyalasomayajula added a comment -

          Review board request link: https://reviews.apache.org/r/13836/

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12600031/HBASE-9343_94.00.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 13 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6911//console

          This message is automatically generated.

          Andrew Purtell added a comment - - edited

          The current scanner implementation for scanner stores state and hence not very suitable for REST server failure scenarios

          That's not quite how I would describe it. The current scanner implementation expects clients to restart scans if there is a REST server failure in the midst. The tradeoff is a pretty close semantic mapping - though definitely not RESTful - to the client API on the one hand, and loss of the cursor upon process failure on the other. Sure, that can be problematic.

          Why introduce new resources and a new model of scanning? Most of what you are trying to do can be done with Gets. Extend the existing resources for that.

          Do we need ProtobufStreamingUtil if REST already has internally a Generator API for iterating over results returned by scanners? Did you partially reimplement that here? What about XML or JSON?

          I am -0 on the changes as is.

          Andrew Purtell added a comment -

          I considered briefly once keeping the REST scan cursor state in ZooKeeper for transparent failover of scans upon REST process failure. This would not have the same scalability as native scanners on account of ZooKeeper operation throughput limits but could surely support on the order of 100s of concurrent scanners open on a REST farm. Clients that need scanner failover would have it without API changes, though they would need to handle possible HTTP redirects. Expectation would be the majority of clients could live with loss of the cursor upon REST process failure though.

          No need to do it this way, just providing a historical note.

          Francis Liu added a comment -

          Andrew Purtell Just to clarify, there are two big motivating pieces behind creating a new scanner resource:

          1. Make the REST server stateless: keep all state in HBase and have the server merely function as a proxy. This makes the system much simpler to scale and manage.
          2. Stream the data instead of issuing a new HTTP request for each batch. This supports #1 and also makes scans more performant (fewer RPC calls, with flow control delegated to the TCP layer). It also eases the pressure of keeping a lot of scan data in memory to stay performant.

          This patch should include support for JSON, XML and protobuf.

          Vandana Ayyalasomayajula added a comment -

          Currently, if a user wants to scan a table, they have to:
          1. Use PUT/POST with the -d option and specify a serialized scanner model object if scan params are needed.
          2. Then issue GET command(s).
          3. Finally use DELETE to delete the scanner.

          But semantically it should only be a GET call, so the above patch passes all the scan parameters as query parameters, which makes it easy to use.
          The ProtobufStreamingUtil class sends records to the user in frames instead of one record per GET call. The existing Generator API uses RowSpec to fetch the records, which the new scan resource does not use, so it has not been reused here.
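
          As a rough illustration of the difference (a sketch, not code from the patch; the host, table name and scanner id are placeholders):

          # today: three round trips, with scanner state held by the REST server
          $ curl -X PUT -H "Content-Type: text/xml" -d '<Scanner batch="10"/>' \
              http://localhost:8080/mytable/scanner        # Location header carries the scanner URI
          $ curl -H "Accept: application/json" http://localhost:8080/mytable/scanner/<scanner-id>
          $ curl -X DELETE http://localhost:8080/mytable/scanner/<scanner-id>

          # proposed: one stateless GET with the scan specification as query parameters
          $ curl -H "Accept: application/json" "http://localhost:8080/mytable?limit=10"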

          Andrew Purtell added a comment -

          Set aside the old scanner stuff for a moment.

          I agree the changes to use streaming are good. Can this be done using the existing resource types and the new query parameters you are introducing, instead of also introducing ScanResource and {table}/scan?

          Francis Liu added a comment -

          I think there are two other options:

          1. GET /table
          2. GET /table/scanner

          #2 is not good since it just convolutes the resource. #1 is more intuitive, though it might be accidentally invoked when users are constructing/playing with URIs.

          Andrew Purtell added a comment -

          #1 works if you are ok with that. Then there's a new way to manage scanning with the approach in this issue, or backwards compatible URL constructions aka RowSpec for the old behavior. Let's see how it goes. Perhaps we adopt this new way (e.g. GET on /table with query parameters) as better, since it streams, and deprecate the old RowSpec way of addressing cells.

          Vandana Ayyalasomayajula added a comment -

          The patch contains changes to scan a table using GET /table with scan specification as query parameters.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12601292/HBASE-9364.01.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7019//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12601871/HBASE-9343_94.01.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7073//console

          This message is automatically generated.

          Vandana Ayyalasomayajula added a comment -

          Patch for trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12601923/HBASE-9343_trunk.00.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          -1 hadoop2.0. The patch failed to compile against the hadoop 2.0 profile.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7077//console

          This message is automatically generated.

          Nick Dimiduk added a comment -

          Please verify your changes vs. Hadoop2 profile:

          $ mvn -Dhadoop.profile=2.0 clean compile
          ...
          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hbase-hadoop2-compat: Compilation failure
          [ERROR] /Users/ndimiduk/repos/hbase/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java:[30,7] org.apache.hadoop.hbase.rest.MetricsRESTSourceImpl is not abstract and does not override abstract method incrementFailedScanRequests(int) in org.apache.hadoop.hbase.rest.MetricsRESTSource
          
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12601953/HBASE-9343_trunk.01.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:

          -1 core zombie tests. There are 1 zombie test(s):

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7081//console

          This message is automatically generated.

          Nick Dimiduk added a comment -

          poking jenkins.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12602157/HBASE-9343_trunk.01.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:

          -1 core zombie tests. There are 1 zombie test(s):

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7092//console

          This message is automatically generated.

          Nick Dimiduk added a comment -

          Hmm. Looks like TestResourceFilter is flaky? It ran in build #7090 but not #7089.

          I'm in favor of adopting the streaming model going forward. The patch structure looks good to me as well. Andrew Purtell how do you recommend Vandana Ayyalasomayajula proceed regarding the API changes and deprecating the existing interface?

          Vandana Ayyalasomayajula added a comment -

          Nick Dimiduk TestResourceFilter is not checked in yet. It's part of HBASE-9347.

          Nick Dimiduk added a comment -

          Vandana Ayyalasomayajula, so it is.

          Vandana Ayyalasomayajula added a comment -

          rebased on open source

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12602864/HBASE-9343_trunk.02.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.coprocessor.TestMasterObserver
          org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
          org.apache.hadoop.hbase.mapred.TestTableInputFormat
          org.apache.hadoop.hbase.mapreduce.TestTimeRangeMapRed
          org.apache.hadoop.hbase.mapreduce.TestRowCounter
          org.apache.hadoop.hbase.io.encoding.TestChangingEncoding
          org.apache.hadoop.hbase.client.TestHTableUtil
          org.apache.hadoop.hbase.mapreduce.TestImportTsv
          org.apache.hadoop.hbase.coprocessor.TestOpenTableInCoprocessor
          org.apache.hadoop.hbase.coprocessor.TestClassLoading
          org.apache.hadoop.hbase.thrift.TestThriftServer
          org.apache.hadoop.hbase.master.cleaner.TestSnapshotFromMaster
          org.apache.hadoop.hbase.trace.TestHTraceHooks
          org.apache.hadoop.hbase.mapreduce.TestCopyTable
          org.apache.hadoop.hbase.mapreduce.TestImportExport
          org.apache.hadoop.hbase.client.TestHTablePool$TestHTableThreadLocalPool
          org.apache.hadoop.hbase.util.TestMergeTool
          org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles
          org.apache.hadoop.hbase.security.access.TestTablePermissions
          org.apache.hadoop.hbase.snapshot.TestExportSnapshot
          org.apache.hadoop.hbase.TestZooKeeper
          org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithRemove
          org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
          org.apache.hadoop.hbase.security.access.TestZKPermissionsWatcher
          org.apache.hadoop.hbase.client.TestClientTimeouts
          org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence
          org.apache.hadoop.hbase.master.TestMasterFailoverBalancerPersistence
          org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan2
          org.apache.hadoop.hbase.client.TestFromClientSideNoCodec
          org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildOverlap
          org.apache.hadoop.hbase.client.TestMultiParallel
          org.apache.hadoop.hbase.mapred.TestTableMapReduce
          org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildBase
          org.apache.hadoop.hbase.security.access.TestAccessControlFilter
          org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
          org.apache.hadoop.hbase.thrift.TestThriftServerCmdLine
          org.apache.hadoop.hbase.regionserver.TestHRegion
          org.apache.hadoop.hbase.client.TestTimestampsFilter
          org.apache.hadoop.hbase.util.TestRegionSplitter
          org.apache.hadoop.hbase.catalog.TestMetaMigrationConvertingToPB
          org.apache.hadoop.hbase.client.TestMetaScanner
          org.apache.hadoop.hbase.master.snapshot.TestSnapshotFileCache
          org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass
          org.apache.hadoop.hbase.client.TestAdmin
          org.apache.hadoop.hbase.client.TestMultipleTimestamps
          org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
          org.apache.hadoop.hbase.master.handler.TestCreateTableHandler
          org.apache.hadoop.hbase.master.TestMasterMetricsWrapper
          org.apache.hadoop.hbase.master.TestMasterRestartAfterDisablingTable
          org.apache.hadoop.hbase.TestAcidGuarantees
          org.apache.hadoop.hbase.master.TestRollingRestart
          org.apache.hadoop.hbase.regionserver.TestHRegionOnCluster
          org.apache.hadoop.hbase.TestFullLogReconstruction
          org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook
          org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
          org.apache.hadoop.hbase.coprocessor.TestBigDecimalColumnInterpreter
          org.apache.hadoop.hbase.mapreduce.TestTableMapReduce
          org.apache.hadoop.hbase.mapreduce.TestWALPlayer
          org.apache.hadoop.hbase.client.TestScannersFromClientSide
          org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
          org.apache.hadoop.hbase.mapreduce.TestCellCounter
          org.apache.hadoop.hbase.TestIOFencing
          org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader
          org.apache.hadoop.hbase.master.TestMasterTransitions
          org.apache.hadoop.hbase.client.TestScannerTimeout
          org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout
          org.apache.hadoop.hbase.util.TestMergeTable
          org.apache.hadoop.hbase.regionserver.TestServerCustomProtocol
          org.apache.hadoop.hbase.client.TestShell
          org.apache.hadoop.hbase.master.TestRestartCluster
          org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper
          org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFilesSplitRecovery
          org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort
          org.apache.hadoop.hbase.util.TestMiniClusterLoadParallel
          org.apache.hadoop.hbase.client.TestSnapshotMetadata
          org.apache.hadoop.hbase.client.TestHTablePool$TestHTableReusablePool
          org.apache.hadoop.hbase.TestDrainingServer
          org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential
          org.apache.hadoop.hbase.master.TestMasterFileSystem
          org.apache.hadoop.hbase.master.TestZKBasedOpenCloseRegion
          org.apache.hadoop.hbase.zookeeper.TestZooKeeperACL
          org.apache.hadoop.hbase.util.TestCoprocessorScanPolicy
          org.apache.hadoop.hbase.master.TestOpenedRegionHandler
          org.apache.hadoop.hbase.io.TestFileLink
          org.apache.hadoop.hbase.master.TestMasterMetrics
          org.apache.hadoop.hbase.client.TestHTableMultiplexer
          org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles
          org.apache.hadoop.hbase.master.TestMasterFailover
          org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery
          org.apache.hadoop.hbase.backup.TestHFileArchiving
          org.apache.hadoop.hbase.master.TestTableLockManager
          org.apache.hadoop.hbase.master.handler.TestTableDescriptorModification
          org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint
          org.apache.hadoop.hbase.mapreduce.TestHRegionPartitioner
          org.apache.hadoop.hbase.client.TestHCM
          org.apache.hadoop.hbase.master.TestMasterShutdown
          org.apache.hadoop.hbase.client.TestSnapshotFromClient
          org.apache.hadoop.hbase.coprocessor.TestWALObserver
          org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
          org.apache.hadoop.hbase.client.TestFromClientSide
          org.apache.hadoop.hbase.util.TestMiniClusterLoadEncoded
          org.apache.hadoop.hbase.master.TestRegionPlacement
          org.apache.hadoop.hbase.client.TestFromClientSide3
          org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
          org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan1
          org.apache.hadoop.hbase.security.access.TestAccessController
          org.apache.hadoop.hbase.TestLocalHBaseCluster
          org.apache.hadoop.hbase.catalog.TestMetaReaderEditor
          org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient
          org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint
          org.apache.hadoop.hbase.master.TestDistributedLogSplitting
          org.apache.hadoop.hbase.util.TestFSUtils
          org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildHole
          org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
          org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface
          org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster
          org.apache.hadoop.hbase.master.cleaner.TestHFileCleaner
          org.apache.hadoop.hbase.master.TestMaster
          org.apache.hadoop.hbase.io.encoding.TestLoadAndSwitchEncodeOnDisk
          org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
          org.apache.hadoop.hbase.util.TestHBaseFsck
          org.apache.hadoop.hbase.regionserver.TestClusterId

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7195//console

          This message is automatically generated.

          Nick Dimiduk added a comment -

          I'm targeting 0.96.0 and trunk for this patch once questions are resolved. stack please push out if you feel differently.

          Nick Dimiduk added a comment -

          This is more of a design review/comment than one specific to your patch. Let me know what you think.

          I'm a fan of rolling out a streaming API for accessing CellSets. However, I think patch v2 is adding to existing confusion. From a data model perspective, hbase starts at the top with tables (well, namespaces now, but ignore that), followed by rows. For someone exploring the API, getting a listing of top-level entities makes sense (it would be nice to also get basic cluster info here, but that's a separate issue):

          $ curl ... http://localhost:8080/ ; echo
          {"table":[{"name":"foo"}]}
          

          The next logical step would be to get information about the table (i.e., the schema) using GET /<table>. Instead, they have to use GET /<table>/schema. Let's set this aside for a minute; I'll come back to it.

          After that, GET /<table>/<rowkey> works as expected (though you're excluded from requesting the 'schema' rowkey).

          $ curl ... http://localhost:8080/foo/r1 ; echo
          {"Row":[{"key":"cjE=","Cell":[{"column":"ZjE6","timestamp":1379113061705,"$":"ZW1wdHkh"},{"column":"ZjE6YmFy","timestamp":1379113067612,"$":"YmF6"}]}]}
          

          According to the HBase data model, I think this makes good sense.

          You can also perform simple prefix-filtered scans using the magical "*" (glob) character (again, excluding people from requesting the '*' rowkey).

          $ curl ... http://localhost:8080/foo/r* ; echo
          {"Row":[{"key":"cjE=","Cell":[{"column":"ZjE6","timestamp":1379113061705,"$":"ZW1wdHkh"},{"column":"ZjE6YmFy","timestamp":1379113067612,"$":"YmF6"}]}]}
          

          Nicely self-consistent, GET /<table>/* returns a full table scan.

          $ curl ... http://localhost:8080/foo/* ; echo
          {"Row":[{"key":"cjE=","Cell":[{"column":"ZjE6","timestamp":1379113061705,"$":"ZW1wdHkh"},{"column":"ZjE6YmFy","timestamp":1379113067612,"$":"YmF6"}]},{"key":"c2NoZW1h","Cell":[{"column":"ZjE6Zm9v","timestamp":1379114118517,"$":"ZG9lcyB0aGlzIHdvcms/"}]}]}

          This patch introduces GET /<table> not as table resource info, but as a way to list rows. Per my earlier comment, I think this should be reserved for table info.

          Does it make sense to instead roll this new streaming scanner stuff into the GET /<table>/* functionality? '*' is special anyway, so why not extend it with these scanner creation query parameters? That way, we can move to an API that behaves like:

          GET / => table list (and maybe cluster info?)
          GET /<table> => table info (existing /<table>/schema)
          GET /<table>/<rowkey> => existing behavior (+ your new streaming hotness?)
          GET /<table>/<optional_prefix>* => existing behavior (+ your new streaming hotness!)
          GET /<table>/<optional_prefix>*?<filter_args...> => all your new streaming hotness plus implied rowkey prefix filter.
          

          I think this starts to look like a more idiomatic rest API. What do you guys think?

          (We should also figure out and document how a user retrieves their precious data hidden behind the rowkeys '*', 'schema', &c.)
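
          For instance, under the scheme sketched above, a prefix-filtered streaming scan might look something like this (an illustration of the proposal, not existing behavior; the host, table and parameter values are placeholders):

          $ curl -H "Accept: application/json" "http://localhost:8080/foo/r*?limit=10&maxversions=1"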

          Nick Dimiduk added a comment -

          Moving out to 0.96.1.

          Devaraj Das added a comment -

          Not sure whether this would work, but could we host the "scan" at /<table>/scan?<queryParams>? Nick Dimiduk, not meaning to rush it, but maybe we can have the discussion based on your writeup in a follow-up jira?

          Nick Dimiduk added a comment -

          Yes, you're probably right Devaraj Das. Is there a way we can push this into the existing /<table>/scanner API? Currently that endpoint expects a PUT or POST to request a scanner creation. Can we put the GET onto the same endpoint to initiate the streaming connection? At least then all the scanner stuff is in the same place.

          Vandana Ayyalasomayajula, Francis Liu, Andrew Purtell What do you think?

          Francis Liu added a comment -

          Nick Dimiduk IMHO adding the new streaming scanner API to /<table>/scanner would convolute that resource. I think your original proposal of '<table>/*' (AKA suffix globbing in the doc) is in line with the existing APIs, and I'd be more amenable to that. It seems that the suffix globbing API only has one query parameter, so there shouldn't be any conflicts. Are we trying to avoid adding a new resource, is that the concern?

          Nick Dimiduk added a comment -

          The reason I suggest /<table>/scanner is because that's already where you go to scan over data in the table. The parameters outlined in this ticket's description are almost identical to the existing scanner parameters. By making them identical and having the client hit the same location, the user API maintains consistency.

          Francis Liu added a comment -

          I see, thanks for the explanation. The reason I'm saying it's convoluting the API is that semantically the current API exposes the scanner as a resource: you can create one, iterate through it and remove it. The scanner we are proposing is a method (as opposed to a resource) on a table resource. Intuitively, a GET on the scanner resource should return information about the scanner. Hence my favoring your original proposal. I don't think we can really maintain API consistency since the semantics are different, so we might as well see where this new API best fits.

          Francis Liu added a comment -

          *Sorry meant convolute the resource.

          Nick Dimiduk added a comment -

          Yeah. I guess my issue comes down to the scanner as a resource at all. It's not a data component or part of addressing data, so IMHO the user shouldn't be thinking about it – it's just an artifact of how the current API facilitates data retrieval.

          I hear what you're saying though. Since your proposal fits relatively well with the existing semantics of GET /<table>/<rowprefix>*, let's see what the API and code look like if you attach it there. Including some invocation examples will be very helpful.

          Thanks for bearing with me as I don the usability hat.

          Andrew Purtell added a comment -

          Moving out of 0.98.0

          Vandana Ayyalasomayajula added a comment -

          Attached new patch with changes for scanning table using:
          GET /<table>/*

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12622713/HBASE-9343_trunk.03.patch
          against trunk revision .
          ATTACHMENT ID: 12622713

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + rModel.addCell(new CellModel(CellUtil.cloneFamily(kv), CellUtil.cloneQualifier(kv), kv.getTimestamp(),
          + @DefaultValue(Integer.MAX_VALUE + "") @QueryParam(Constants.SCAN_LIMIT) int userRequestedLimit,
          + @DefaultValue(Integer.MAX_VALUE + "") @QueryParam(Constants.SCAN_LIMIT) int userRequestedLimit,

          -1 site. The patch appears to cause mvn site goal to fail.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.util.TestHBaseFsck

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8407//console

          This message is automatically generated.

          Vandana Ayyalasomayajula added a comment -

          This patch fixes the line-length issue raised by Hadoop QA. The behavior now is the following (see the sketch after the list):

          GET /<table>/<rowkey> => existing behavior
          GET /<table>/<optional_prefix>* => new streaming scanner with prefix filter.
          GET /<table>/<optional_prefix>*?<scan_args...> => new streaming scanner with prefix filter and scan parameters.
          GET /<table>/* => new streaming scanner
          GET /<table>/*?<scan_args...> => new streaming scanner with scan parameters.
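
          As a usage sketch only (the host/port, the table name "foo", and the exact query-parameter spellings are assumptions based on this issue's description, e.g. startrow, endrow, limit, maxversions, starttime, endtime; the committed patch may name them differently):

          $ curl -H 'Accept: text/xml' 'http://localhost:8080/foo/*?startrow=r1&endrow=r9&limit=100'
          $ curl -H 'Accept: text/xml' 'http://localhost:8080/foo/r*?maxversions=2&starttime=1379113061705&endtime=1379113067613'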

          Vandana Ayyalasomayajula added a comment -

          Nick Dimiduk – If you have time, could you please review the latest patch? Thanks!

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12622756/HBASE-9343_trunk.04.patch
          against trunk revision .
          ATTACHMENT ID: 12622756

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8413//console

          This message is automatically generated.

          Andrew Purtell added a comment -

          We have been neglecting this issue, I apologize.

          I am inclined to commit this on the grounds of having had several review cycles and being driven by user need. Anyone disagree?

          Andrew Purtell added a comment -

          The only thing I would ask is an update to the documentation on the new behaviors.

          Nick Dimiduk added a comment -

          This is really nice, Vandana Ayyalasomayajula! I think this will make using this API a lot more intuitive for web developers. Per Andrew's request, a new section added to the rest package javadoc would be fantastic. Do you see deprecation of the existing /<table>/scanner resources in a future patch?

          I do have one question though, which is: how does this interact with the existing row-based suffix-globbing? Are these APIs compatible? Your new goodness should be a superset of that functionality, right?

          Andrew Purtell: Pending some docs, are you keen on letting this slip into your RC?

          stack added a comment -

          That is sufficient justification for me Andrew Purtell

          Vandana Ayyalasomayajula added a comment -

          Nick Dimiduk I added a testSuffixGlobbingXML test in TestGetAndPutResource to make sure the existing row-based suffix-globbing behavior stays consistent. As mentioned in the above document, it boils down to a scanner with a prefix filter.
          Can I add a dependent jira for documentation and old scanner deprecation if needed?
          Thanks all for the quick reviews.

          Vandana Ayyalasomayajula added a comment -

          The following API will not work, since the same parameters need to be specified differently (as query params) with the new scanner.

          GET /<table>/<rowprefix>*/( <column> ( : <qualifier> )?
          ( , <column> ( : <qualifier> )? )+ )?
          ( / ( <start-timestamp> ',' )? <end-timestamp> )? )?
          ( ?v= <num-versions> )?

          Andrew Purtell added a comment -

          Can I add a dependent jira for documentation and old scanner deprecation if needed?

          The following API will not work, since the same parameters need to be specified differently (as query params) with the new scanner.

          Yes, this could possibly go into 0.98 if old APIs and behaviors are deprecated in 0.96 and documented as such in the online manual. That would depend on what Stack wants to let in. In any case it looks like we could use an update of this patch that also includes a new section for the online manual on the difference in REST API before and after this patch. That will help us evaluate what branches it should ultimately go into.

          Nick Dimiduk added a comment -

          The following API will not work, since the same parameters need to be specified differently (as query params) with the new scanner.

          That's what I was afraid of. We'll need to make sure we stage the API change responsibly. @stack, your attention as 0.96 RM is requested.

          Nick Dimiduk added a comment -

          Derp. stack ^^^

          stack added a comment -

          Not for 0.96 right? That is long gone. Hurry up for 0.98?

          Nick Dimiduk added a comment -

          No, not the new feature for 0.96. I was thinking of deprecating APIs.

          Do we only introduce new deprecation markers in a point release, not patch releases?

          Andrew Purtell added a comment -

          This shouldn't go into 0.98 unless the old behaviors are deprecated in 0.96. There should be decorations and printed warnings and such. Otherwise, we could put those things into 0.98 and the feature into whatever comes after.

          stack added a comment -

          IMO, deprecation after major release – in point release – doesn't count. No harm adding them though.

          Nick Dimiduk added a comment -

          Thank you, gentlemen, for the clarification.

          Vandana Ayyalasomayajula added a comment -

          We would like this patch to go into 0.96 as well as 0.98. I can create two separate patches depending on what is required for each branch.

          Andrew Purtell added a comment -

          IMO, deprecation after major release – in point release – doesn't count. No harm adding them though.

          So then we only get one chance at each .0 to do deprecations? That is ... limiting. I tend to look at each major release as one product. Any 0.94.x is 0.94, any 0.96.x is 0.96, and so on. My expectation is that deployments follow along the minor version progression, so they will notice a deprecation. Every user is different, but we have a choice to generalize in a way that is really restrictive on us or one that gives us more flexibility to get things out in usable releases sooner.

          Andrew Purtell added a comment -

          We would like this patch to go into 0.96 as well as 0.98. I can create two separate patches depending on what is required for each branch.

          This patch can't go into 0.96. That should be clear. We are talking about deprecations in 0.96 that make it possible to put it into 0.98.

          stack added a comment -

          We need to get to 1.0 (smile). I'd be fine if we agree deprecation in a point release 'counts'. We can revisit when up on 1.0.

          Vandana Ayyalasomayajula This could only go into 0.96 if it didn't break the current REST API (0.98 is wire and API compatible w/ 0.96, FYI).

          Andrew Purtell added a comment -

          0.98 should be wire compatible with 0.96, unless we have deprecated something in 0.96 as fair warning, if we agree that deprecation in point releases counts, at least for now. This change breaks wire compatibility so we need fair warning allowed into 0.96 in order for the feature itself to go into 0.98.

          Vandana Ayyalasomayajula added a comment -

          I have attached a new patch that maintains compatibility with the older APIs (see the sketch after the list):

          GET /<table>/<rowkey> => existing behavior
          GET /<table>/<optional_prefix>*/<columns>/ => existing behavior
          GET /<table>/<optional_prefix>* => new streaming scanner with prefix filter.
          GET /<table>/<optional_prefix>*?<scan_args...> => new streaming scanner with prefix filter and scan parameters.
          GET /<table>/* => new streaming scanner
          GET /<table>/*?<scan_args...> => new streaming scanner with scan parameters
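
          For comparison, a sketch of the two coexisting forms (the table "foo", column "f1:bar", the timestamps, and the new query-parameter names are illustrative assumptions based on this discussion and the issue description, not taken verbatim from the patch):

          $ curl 'http://localhost:8080/foo/r*/f1:bar/1379113061705,1379113067613?v=2'
          # old path-based suffix-globbing form, unchanged

          $ curl 'http://localhost:8080/foo/r*?columns=f1:bar&starttime=1379113061705&endtime=1379113067613&maxversions=2'
          # new stateless-scanner form, same constraints expressed as query parameters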

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12623219/HBASE-9343_trunk.05.patch
          against trunk revision .
          ATTACHMENT ID: 12623219

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 16 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8438//console

          This message is automatically generated.

          Andrew Purtell added a comment - - edited

          I have attached a new patch that maintains compatibility with older APIs:

          Great, I have no further concerns about putting in trunk or 0.98. Thanks.
          Edit: But I don't see doc updates in the new patch for the new behaviors. Not a must but would be great.

          Vandana Ayyalasomayajula added a comment -

          Andrew Purtell Can I create a new jira for documenting the new scanner?

          Andrew Purtell added a comment -

          Can we have the doc before cutting the 0.98 RC (within days)?

          Vandana Ayyalasomayajula added a comment -

          Andrew Purtell Definitely. I will start working on documentation asap.

          Andrew Purtell added a comment -

          Ok, +1 to commit the latest patch with a follow-up issue for docs. I can do this for trunk and 0.98 in a bit. Feel free to do so ahead of me if you like, Nick Dimiduk

          Nick Dimiduk added a comment -

          Let me be pedantic for a moment here.

          GET /<table>/<rowkey> => existing behavior
          GET /<table>/<optional_prefix>*/<columns>/ => existing behavior
          GET /<table>/<optional_prefix>* => new streaming scanner with prefix filter.
          GET /<table>/<optional_prefix>*?<scan_args...> => new streaming scanner with prefix filter and scan parameters.
          GET /<table>/* => new streaming scanner
          GET /<table>/*?<scan_args...> => new streaming scanner with scan parameters

          These explicitly overlap with the existing documented behavior in the package-info. Specifically, I'm looking at the suffix-globbing functionality. Meaning, I think your patch overrides existing APIs on

          GET /<table>/<optional_prefix>*
          GET /<table>/<optional_prefix>*?<scan_args...>
          GET /<table>/*

          The semantics of your new scanner must match the semantics of the original feature, both in terms of the accepted arguments and the response body. If I understand this correctly, this patch is faithful to argument consistency, but I'm concerned that it's inconsistent in the response body by design – it's a streamed response instead of the existing response. I'm not an expert in the HTTP protocol spec; is this difference going to be transparent to existing clients, or will it break them?

          Vandana Ayyalasomayajula added a comment -

          I think the new scanner overrides the following existing APIs

          GET /<table>/<optional_prefix>*
          GET /<table>/*

          Scan arguments are not taken as query parameters in the existing APIs. I added unit tests testSuffixGlobbingXMLWithNewScanner() and testSuffixGlobbingXML() to make sure the responses are not different. The new scanner uses the same RowModel class for streaming, hidden under the CellSetModelStream class (in TableScanResource.java). The XML tags are the same, so the unmarshaller unmarshals the response body in the same manner.
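
          For readers following the thread, the streamed response is still a plain CellSet document. A rough sketch of what a client might see (the table, row, and cell values reuse the base64-encoded examples from earlier in this discussion; the exact layout of the streamed XML is not guaranteed):

          $ curl -H 'Accept: text/xml' 'http://localhost:8080/foo/r*?limit=1'
          <CellSet><Row key="cjE="><Cell column="ZjE6YmFy" timestamp="1379113067612">YmF6</Cell></Row></CellSet>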

          Jimmy Xiang added a comment -

          1. If start row, end row and limit not specified, then the whole table will be scanned.

          This does not sound good. If the table is huge, are we still going to return the whole table? I was wondering if we should have a default max data size, and if the cap is reached, return what we have so far with a flag saying there is more data to fetch. Without a cap, the REST server could easily run out of memory.

          Nick Dimiduk added a comment -

          the unmarshaller unmarshalls the response body in the same manner.

          Thanks, that's exactly what I needed to hear.

          If the table is huge, are we still going to return the whole table?

          Yes, that's the behavior today, AFAIK. At least, with the new streaming response, a single client won't overrun a single gateway instance anymore. Does that sound right, Vandana Ayyalasomayajula?

          Pending any further objections, I'll commit to 0.98 and trunk this afternoon. Let's get HBASE-10346 in soon too so as to not keep the 0.98 RC waiting.

          Nick Dimiduk added a comment -

          With the latest patch here that maintains backward compatibility, is there anything to do for the next 0.96 patch release? Sounds to me like not...

          Andrew Purtell added a comment -

          With the latest patch here that maintains backward compatibility, is there anything to do for the next 0.96 patch release? Sounds to me like not...

          No

          Vandana Ayyalasomayajula added a comment -

          Jimmy Xiang – I currently have the default limit set to Integer.MAX_VALUE; I can change it to some better value and take that up as a dependent JIRA. What do you think a good value is? We send the rows lazily to the client, so the server running out of memory is a less likely issue. Even in that case, the client can resend the request with updated scan parameters so that scanning can be continued.
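
          A sketch of that continuation pattern (parameter names are taken from the issue description; the table name, row key, and limit values are made up for illustration):

          $ curl 'http://localhost:8080/foo/*?limit=1000'
          # first batch of up to 1000 rows

          $ curl 'http://localhost:8080/foo/*?startrow=row1000&limit=1000'
          # resend with startrow set from the last row key returned (adjust for startrow being inclusive)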

          Jimmy Xiang added a comment -

          Can we at least make it configurable? It is fine with me to do it in a separate issue. Thanks.

          Nick Dimiduk added a comment -

          Committed to 0.98 and trunk. Thanks for another nice improvement, Vandana Ayyalasomayajula!

          Hudson added a comment -

          FAILURE: Integrated in HBase-0.98 #89 (See https://builds.apache.org/job/HBase-0.98/89/)
          HBASE-9343 Implement stateless scanner for Stargate (Vandana Ayyalasomayajula) (ndimiduk: rev 1558995)

          • /hbase/branches/0.98/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSource.java
          • /hbase/branches/0.98/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/branches/0.98/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/Constants.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/MetricsREST.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ProtobufStreamingUtil.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Client.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Response.java
          • /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
          • /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
          • /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4829 (See https://builds.apache.org/job/HBase-TRUNK/4829/)
          HBASE-9343 Implement stateless scanner for Stargate (Vandana Ayyalasomayajula) (ndimiduk: rev 1558994)

          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSource.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/Constants.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/MetricsREST.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ProtobufStreamingUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Client.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Response.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #81 (See https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/81/)
          HBASE-9343 Implement stateless scanner for Stargate (Vandana Ayyalasomayajula) (ndimiduk: rev 1558995)

          • /hbase/branches/0.98/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSource.java
          • /hbase/branches/0.98/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/branches/0.98/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/Constants.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/MetricsREST.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ProtobufStreamingUtil.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Client.java
          • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Response.java
          • /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
          • /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
          • /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #56 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/56/)
          HBASE-9343 Implement stateless scanner for Stargate (Vandana Ayyalasomayajula) (ndimiduk: rev 1558994)

          • /hbase/trunk/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSource.java
          • /hbase/trunk/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/trunk/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/rest/MetricsRESTSourceImpl.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/Constants.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/MetricsREST.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ProtobufStreamingUtil.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Client.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/Response.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
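          As a side note on what these committed files add up to: the new TableScanResource exposes scan results directly on the table resource, so a client can run a one-shot scan without first creating a server-side scanner. The sketch below is only an illustration of that usage pattern, not code taken from the patch; the gateway address (localhost:8080), the table name (mytable), and the exact query-parameter spelling are assumptions to be checked against the patch itself.

          import java.io.BufferedReader;
          import java.io.InputStreamReader;
          import java.net.HttpURLConnection;
          import java.net.URL;

          public class StatelessScanExample {
            public static void main(String[] args) throws Exception {
              // Hypothetical REST gateway and table; adjust to the actual deployment.
              // All scan options ride along as query parameters on a single GET request.
              URL url = new URL("http://localhost:8080/mytable/*"
                  + "?startrow=row000&limit=10&maxversions=1");
              HttpURLConnection conn = (HttpURLConnection) url.openConnection();
              conn.setRequestMethod("GET");
              conn.setRequestProperty("Accept", "application/json");
              try (BufferedReader reader = new BufferedReader(
                  new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                  System.out.println(line); // cells of the scanned rows, JSON-encoded
                }
              } finally {
                conn.disconnect();
              }
            }
          }

          Because the request carries all of the scan state, a failed call can simply be retried against any gateway instance rather than depending on a scanner held by one particular REST server.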
          Enis Soztutar added a comment -

          Closing this issue after 0.99.0 release.


            People

            • Assignee:
              Vandana Ayyalasomayajula
            • Reporter:
              Vandana Ayyalasomayajula
            • Votes:
              0
            • Watchers:
              12
