HBASE-5728: Methods Missing in HTableInterface

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.94.2
    • Component/s: Client
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Dear all,

      I found that some methods present in HTable are not in HTableInterface.

      setAutoFlush
      setWriteBufferSize
      ...

      In most cases, I manipulate HBase through an HTableInterface obtained from HTablePool. If I need to use the above methods, how can I do that?

      I am considering writing my own table pool if there is no proper way. Is that fine?

      Thanks so much!

      Best regards,
      Bing
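
      For context, the workaround available before this fix was to downcast the pooled HTableInterface reference back to the concrete HTable. A minimal sketch of that pattern, using hypothetical stand-in types (TableIface, PooledTable) rather than the real HBase classes:

      ```java
      // Hypothetical stand-ins for HTableInterface / HTable, used only to
      // illustrate the downcast workaround; they are not the real HBase classes.
      interface TableIface {
          boolean isAutoFlush();
      }

      class PooledTable implements TableIface {
          private boolean autoFlush = true;

          @Override
          public boolean isAutoFlush() { return autoFlush; }

          // Exists on the concrete class only, like HTable#setAutoFlush before this fix.
          public void setAutoFlush(boolean autoFlush) { this.autoFlush = autoFlush; }
      }

      public class DowncastDemo {
          // Reaches the concrete-only setter through an interface reference.
          public static boolean disableAutoFlush(TableIface table) {
              if (table instanceof PooledTable) {
                  ((PooledTable) table).setAutoFlush(false);
              }
              return table.isAutoFlush();
          }
      }
      ```

      The obvious drawback, and the motivation for this issue, is that the downcast couples client code to one implementation, which defeats the purpose of coding against the interface.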

      1. trunk-5728.patch
      Attachments
        8 kB
        Jimmy Xiang
      2. trunk-5728_v4.patch
        6 kB
        Jimmy Xiang
      3. trunk-5728_v3.patch
        8 kB
        Jimmy Xiang
      4. trunk-5728_v2.patch
        10 kB
        Jimmy Xiang

          Activity

          stack added a comment -

          Fix up after bulk move overwrote some 0.94.2 fix versions w/ 0.95.0 (Noticed by Lars Hofhansl)

          Hudson added a comment -

          Integrated in HBase-0.94-security-on-Hadoop-23 #7 (See https://builds.apache.org/job/HBase-0.94-security-on-Hadoop-23/7/)
          HBASE-5728 Methods Missing in HTableInterface (Revision 1373480)

          Result = FAILURE
          jxiang :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
          Hudson added a comment -

          Integrated in HBase-0.94-security #48 (See https://builds.apache.org/job/HBase-0.94-security/48/)
          HBASE-5728 Methods Missing in HTableInterface (Revision 1373480)

          Result = FAILURE
          jxiang :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
          Jimmy Xiang added a comment -

          Autoflush or not implies there is a buffer too. However, I think the buffer itself should be kept hidden from the interface. Otherwise, a client app could mess around with it, which should be prevented, right?

          Lars George added a comment -

          Sorry for being late here, but one thing that irks me is that you can set the buffer size, which already implies what the given value means, but have no means to return the buffer itself. Should we create a new JIRA that abstracts the write buffer into an interface that is returned? That way we can add it to HTableInterface as "public WriteBuffer getWriteBuffer()" or some such. Then the write buffer size, as a long, could be interpreted as whatever the actual implementation chooses.
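
          The WriteBuffer abstraction suggested above was only proposed in this comment and never became part of the HBase API. A hypothetical sketch of what it might look like, with String standing in for Put to keep the example self-contained:

          ```java
          import java.util.ArrayList;
          import java.util.List;

          // Hypothetical sketch of the proposed WriteBuffer abstraction;
          // not a real HBase interface. String stands in for Put.
          interface WriteBuffer {
              void add(String put);     // buffer a mutation
              long sizeBytes();         // occupancy, interpreted however the implementation chooses
              List<String> flush();     // drain and return the buffered mutations
          }

          class SimpleWriteBuffer implements WriteBuffer {
              private final List<String> puts = new ArrayList<>();

              @Override
              public void add(String put) { puts.add(put); }

              @Override
              public long sizeBytes() {
                  long total = 0;
                  for (String p : puts) total += p.length();
                  return total;
              }

              @Override
              public List<String> flush() {
                  List<String> drained = new ArrayList<>(puts);
                  puts.clear();
                  return drained;
              }
          }

          public class WriteBufferDemo {
              // Buffers two mutations, then drains them in one flush.
              public static int bufferedThenFlushed() {
                  WriteBuffer buf = new SimpleWriteBuffer();
                  buf.add("row1");
                  buf.add("row2");
                  return buf.flush().size();
              }
          }
          ```

          Hiding the buffer behind such an interface would address Jimmy's concern above: clients could observe and flush it without reaching into the implementation's internal list.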

          Hudson added a comment -

          Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #131 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/131/)
          HBASE-5728 Methods Missing in HTableInterface (Revision 1373481)

          Result = FAILURE
          jxiang :
          Files :

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
          Hudson added a comment -

          Integrated in HBase-0.94 #393 (See https://builds.apache.org/job/HBase-0.94/393/)
          HBASE-5728 Methods Missing in HTableInterface (Revision 1373480)

          Result = FAILURE
          jxiang :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
          Hudson added a comment -

          Integrated in HBase-TRUNK #3217 (See https://builds.apache.org/job/HBase-TRUNK/3217/)
          HBASE-5728 Methods Missing in HTableInterface (Revision 1373481)

          Result = FAILURE
          jxiang :
          Files :

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
          Jimmy Xiang added a comment -

          Thanks everyone. Patch v4 was integrated to 0.96 and 0.94.2.

          Lars Hofhansl added a comment -

          Let's put this into 0.94 as well. Unless we're worried about folks who have created their own implementations of HTableInterface.

          Lars Hofhansl added a comment -

          +1 on v4

          Jimmy Xiang added a comment -

          isAutoFlush() is already in the interface. Just adding those auto flush related methods should not affect the purity of the interface. Let's go with v4, how is that?

          Lars Hofhansl added a comment -

          I'm fine either way. If we do not want to keep the interface pure I'm all for adding the *key methods there as well.

          Jimmy Xiang added a comment -

          @Lars, I see. So should we go with patch 1?

          Andrew Purtell added a comment -

          v4 removed getRegionLocations. In TableResource/RegionsResource, we can use HBaseAdmin to get such info instead of using HTablePool.

          +1

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12540981/trunk-5728_v4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 9 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2570//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2570//console

          This message is automatically generated.

          Lars Hofhansl added a comment -

          @Jimmy: As an example we use these to parallelize scans across multiple region servers. But we just use HTables for that.
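
          To illustrate the parallel-scan use case mentioned above: the per-region start/end keys returned by getStartEndKeys() naturally define independent scan ranges that can be divided among workers. A self-contained sketch, with String keys standing in for the byte[][] arrays the real API returns:

          ```java
          import java.util.ArrayList;
          import java.util.List;

          // Sketch of dividing per-region scan ranges among worker threads.
          // String stands in for byte[] row keys; "" denotes the open-ended
          // first start key / last end key, as in HBase.
          public class ParallelScanDemo {
              // Assigns each region's [start, end) range to a worker, round-robin.
              public static List<List<String[]>> partition(
                      String[] startKeys, String[] endKeys, int workers) {
                  List<List<String[]>> buckets = new ArrayList<>();
                  for (int w = 0; w < workers; w++) {
                      buckets.add(new ArrayList<>());
                  }
                  for (int i = 0; i < startKeys.length; i++) {
                      buckets.get(i % workers).add(new String[] { startKeys[i], endKeys[i] });
                  }
                  return buckets;
              }
          }
          ```

          Each bucket can then be handed to its own thread, which opens a scanner bounded by the range's start and end key, so no two workers scan the same region.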

          Jonathan Hsieh added a comment -

          +1 v4. Maybe file another jira if other metadata ops are in HTable and should be moved. Lars this sound good to you?

          Jimmy Xiang added a comment -

          @Lars, yes, we can add all useful methods to the interface. For now, I am not sure if the start/end key methods are useful. They are for MR. getRegionLocations is for MR too.

          Jimmy Xiang added a comment -

          v4 removed getRegionLocations. In TableResource/RegionsResource, we can use HBaseAdmin to get such info instead of using HTablePool.

          Lars Hofhansl added a comment -

          @Jimmy: Once we agreed to option #2 above let's just add all useful methods to the interface, that would include the start/end key methods.
          @Jon: Excellent point. Maybe all metadata type operations should be removed from HTable.

          Jonathan Hsieh added a comment -

          Hm.. why isn't

          public NavigableMap<HRegionInfo, ServerName> getRegionLocations() throws IOException
          

          in HBaseAdmin or something like that instead of HTable?

          HBaseAdmin seems to have these already which seem very related.

          List<HRegionInfo> getTableRegions(byte[] tableName)
          List<HRegionInfo> getOnlineRegions(ServerName sn)
          

          Seems like this is for metadata instead of for getting or putting data like the others.

          Jimmy Xiang added a comment -

          v3 doesn't have start/end keys methods. We can add them later when needed.

          Jimmy Xiang added a comment -

          In v2, there are methods about start/end keys. Should we keep them in HTableInterface?

          Lars Hofhansl added a comment -

          +1 on v2

          Lars Hofhansl added a comment -

          I'm fine with #2. Returning NavigableMap is good here, because it lets the caller do gymnastics based on the order of the keys, which is sometimes useful.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12540779/trunk-5728_v2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 9 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.replication.TestReplication

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2560//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2560//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2560//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2560//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2560//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2560//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2560//console

          This message is automatically generated.

          Jimmy Xiang added a comment -

          +1 for 2.

          It seems to me, too, that HTableInterface doesn't need that start/end key info. So, how about adding the following methods for now?

            public void setAutoFlush(boolean autoFlush);
            public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail);
          
            public long getWriteBufferSize();
            public void setWriteBufferSize(long writeBufferSize) throws IOException;
          
            public NavigableMap<HRegionInfo, ServerName> getRegionLocations() throws IOException;
          

          We can always add more methods if needed later on.

          We can use Map instead of NavigableMap if that makes sense.

          Lars Hofhansl added a comment -

          There might be HTableInterface implementations where it makes no sense to even speak of start and end keys.

          I think we have three options here:

          1. Keep HTableInterface pure as is, and use HTable directly when we need to know about the implementation.
          2. Add all necessary methods to HTableInterface
          3. Create another interface, extending HTableInterface, which includes the needed implementation-specific methods. Then the calling code can decide what level of interface is needed.

          Not sure which one is best.
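
          Option 3 above can be sketched as a two-level interface hierarchy. The names (RawTable, RichTable) are hypothetical stand-ins, not real HBase types:

          ```java
          // Hypothetical sketch of option 3: a pure data-access interface plus
          // a richer one that adds implementation-specific tuning methods.
          interface RawTable {
              String get(String row);                 // pure, implementation-agnostic access
          }

          interface RichTable extends RawTable {
              void setAutoFlush(boolean autoFlush);   // implementation-specific tuning
              long getWriteBufferSize();
          }

          class DemoTable implements RichTable {
              private boolean autoFlush = true;

              @Override
              public String get(String row) { return "value-of-" + row; }

              @Override
              public void setAutoFlush(boolean autoFlush) { this.autoFlush = autoFlush; }

              @Override
              public long getWriteBufferSize() { return 2097152L; }  // hypothetical 2 MB default
          }

          public class InterfaceLevelsDemo {
              // Code that needs tuning asks for RichTable; plain readers take RawTable.
              public static long tune(RichTable t) {
                  t.setAutoFlush(false);
                  return t.getWriteBufferSize();
              }
          }
          ```

          The appeal of this option is that most callers keep depending only on the narrow interface, while tuning code opts in to the wider one explicitly.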

          Jimmy Xiang added a comment -

          Added one method:

            public NavigableMap<HRegionInfo, ServerName> getRegionLocations() throws IOException;
          

          It is to replace

          public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException;
          

          which is used by TableResource.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12540691/trunk-5728.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

          -1 findbugs. The patch appears to introduce 9 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.regionserver.TestServerCustomProtocol

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2555//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2555//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2555//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2555//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2555//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2555//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2555//console

          This message is automatically generated.

          Jimmy Xiang added a comment -

          Based on comments, I put up a patch. It added the following methods to HTableInterface from HTable:

            public byte[][] getStartKeys() throws IOException;
            public byte[][] getEndKeys() throws IOException;
            public Pair<byte[][], byte[][]> getStartEndKeys() throws IOException;
          
            public void setAutoFlush(boolean autoFlush);
            public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail);
          
            public long getWriteBufferSize();
            public void setWriteBufferSize(long writeBufferSize) throws IOException;
          
          Show
          Jimmy Xiang added a comment - Based on comments, I put up a patch. It added the following methods to HTableInterface from HTable: public byte[][] getStartKeys() throws IOException; public byte[][] getEndKeys() throws IOException; public Pair<byte[][], byte[][]> getStartEndKeys() throws IOException; public void setAutoFlush(boolean autoFlush); public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail); public long getWriteBufferSize(); public void setWriteBufferSize(long writeBufferSize) throws IOException,
          Lars Hofhansl added a comment -

          These:

          public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException;
          public HRegionLocation getRegionLocation(String row) throws IOException;
          public HRegionLocation getRegionLocation(byte[] row) throws IOException;
          
          public void prewarmRegionCache(Map<HRegionInfo, HServerAddress> regionMap);
          public void clearRegionCache();
          
          public long getWriteBufferSize();
          public void setWriteBufferSize(long writeBufferSize) throws IOException;
          public ArrayList<Put> getWriteBuffer();
          

          Would leak implementation stuff into the interface.
          I think HBASE-4054 specifically mentions that

          public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException;

          is needed. Hmm...
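          The "leaking implementation" concern can be made concrete with a toy sketch (invented names, not HBase code): an interface that hands out its internal ArrayList can only be implemented by classes that actually keep such a buffer, while a behavior-only interface can be satisfied by any backend, buffered or not:

          ```java
          import java.util.ArrayList;
          import java.util.List;

          // Leaky: commits every implementor to a concrete ArrayList and to
          // buffering writes at all. A REST-backed or coprocessor-backed table
          // has no such buffer to hand out.
          interface LeakyTable {
              ArrayList<String> getWriteBuffer();
          }

          // Behavioral: states what the table does, not how. Any implementation
          // can satisfy it.
          interface BehavioralTable {
              void put(String row);
              void flushCommits();
          }

          class BufferedImpl implements BehavioralTable {
              private final List<String> buffer = new ArrayList<>(); // private detail
              public void put(String row) { buffer.add(row); }
              public void flushCommits() { buffer.clear(); }
              int pending() { return buffer.size(); }
          }

          class PassThroughImpl implements BehavioralTable {
              // No buffer at all -- could not implement LeakyTable honestly.
              public void put(String row) { /* sent immediately */ }
              public void flushCommits() { /* nothing to do */ }
          }

          public class LeakDemo {
              public static void main(String[] args) {
                  // Callers can swap backends freely behind BehavioralTable.
                  for (BehavioralTable t : new BehavioralTable[] {
                          new BufferedImpl(), new PassThroughImpl() }) {
                      t.put("r1");
                      t.flushCommits();
                  }
                  System.out.println("ok");
              }
          }
          ```

          By this reading, getWriteBuffer() and prewarmRegionCache() belong on HTable, while behavior like setAutoFlush() is a reasonable interface candidate.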

          Lars George added a comment -

          HBASE-4054 needs some of the listed missing methods so that it can return to instantiating HTableInterfaces as opposed to an extended HTable.

          Lars Hofhansl added a comment -
          public int getScannerCaching();
          public void setScannerCaching(int scannerCaching);
          

          These two are deprecated (in trunk at least); let's not add them to the interface.

          stack added a comment -

          @Bing Yes. If you are up for it.

          @Lars Thanks for doing the research.

          Bing Li added a comment -

          Stack,

          Thanks so much! You mean I need to fix the bug myself?

          Best,
          Bing

          Lars George added a comment -

          Here is what you find only in HTable in 0.90.x:

          public ArrayList<Put> getWriteBuffer();
          public int getCurrentNrHRS() throws IOException;
          public HRegionLocation getRegionLocation(String row) throws IOException;
          public HRegionLocation getRegionLocation(byte[] row) throws IOException;
          public HConnection getConnection();
          public int getScannerCaching();
          public void setScannerCaching(int scannerCaching);
          public byte[][] getStartKeys() throws IOException;
          public byte[][] getEndKeys() throws IOException;
          public Pair<byte[][], byte[][]> getStartEndKeys() throws IOException;
          public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException;
          public void prewarmRegionCache(Map<HRegionInfo, HServerAddress> regionMap);
          public void serializeRegionInfo(DataOutput out) throws IOException;
          public Map<HRegionInfo, HServerAddress> deserializeRegionInfo(DataInput in) throws IOException;
          public void clearRegionCache();
          public void setAutoFlush(boolean autoFlush);
          public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail);  
          public long getWriteBufferSize();
          public void setWriteBufferSize(long writeBufferSize) throws IOException;
          

          Out of those we should check which ones should be elevated into HTableInterface. Sorting them and dropping some serialization-related ones, I could think of these to be added:

          public HConnection getConnection();
          
          public int getScannerCaching();
          public void setScannerCaching(int scannerCaching);
          
          public byte[][] getStartKeys() throws IOException;
          public byte[][] getEndKeys() throws IOException;
          public Pair<byte[][], byte[][]> getStartEndKeys() throws IOException;
          public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException;
          public HRegionLocation getRegionLocation(String row) throws IOException;
          public HRegionLocation getRegionLocation(byte[] row) throws IOException;
          
          public void prewarmRegionCache(Map<HRegionInfo, HServerAddress> regionMap);
          public void clearRegionCache();
          
          public void setAutoFlush(boolean autoFlush);
          public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail);  
          
          public long getWriteBufferSize();
          public void setWriteBufferSize(long writeBufferSize) throws IOException;
          public ArrayList<Put> getWriteBuffer();
          

          This is adding a lot to the interface, but we really need to decide what is "generic" and what is an implementation detail needed only for a concrete class implementing HTableInterface.

          I could imagine getConnection() being very specific to a remote table implementation. It might not be required for, for example, an HTableInterface implementation for coprocessors (i.e. the one returned by the coprocessor context).

          This is open for feedback, please!
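          A minimal sketch (hypothetical names, simplified types, not HBase code) of why interface coverage matters for pooled use in particular: a pool hands back a delegating wrapper, and the wrapper can only forward what the interface declares, so any method missing from the interface is unreachable without downcasting to the concrete class:

          ```java
          // Simplified stand-in for HTableInterface, after behavior-level
          // methods have been added to it.
          interface TableIface {
              void put(String row);
              void setAutoFlush(boolean autoFlush);
              long getWriteBufferSize();
          }

          class ConcreteTable implements TableIface {
              private boolean autoFlush = true;
              private long writeBufferSize = 2097152L; // 2 MB, an invented default
              public void put(String row) { /* elided */ }
              public void setAutoFlush(boolean a) { this.autoFlush = a; }
              public long getWriteBufferSize() { return writeBufferSize; }
              boolean isAutoFlush() { return autoFlush; }
          }

          // The pool's wrapper: in real pools it intercepts lifecycle calls
          // (e.g. returning the table to the pool on close); everything else is
          // plain delegation -- but only for methods the interface declares.
          class PooledTable implements TableIface {
              private final ConcreteTable delegate;
              PooledTable(ConcreteTable d) { this.delegate = d; }
              public void put(String row) { delegate.put(row); }
              public void setAutoFlush(boolean a) { delegate.setAutoFlush(a); }
              public long getWriteBufferSize() { return delegate.getWriteBufferSize(); }
          }

          public class PoolDemo {
              public static void main(String[] args) {
                  ConcreteTable real = new ConcreteTable();
                  TableIface pooled = new PooledTable(real);
                  pooled.setAutoFlush(false);             // reaches the real table
                  System.out.println(real.isAutoFlush()); // prints false
              }
          }
          ```

          Before such methods were on the interface, a caller holding only the pooled wrapper could not reach setAutoFlush at all, which is the situation the reporter describes.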

          stack added a comment -

          Want to make a patch Bing?


            People

            • Assignee: Jimmy Xiang
            • Reporter: Bing Li
            • Votes: 0
            • Watchers: 10
