Details

    • Release Note:
      Coprocessor endpoints can now be implemented as protocol buffer-defined Services. CoprocessorProtocol-based implementations are now deprecated. See the package javadoc for org.apache.hadoop.hbase.client.coprocessor for implementation details.
    1. HBASE-5448.patch
      357 kB
      Gary Helmling
    2. HBASE-5448_4.patch
      444 kB
      Gary Helmling
    3. HBASE-5448_3.patch
      444 kB
      Gary Helmling
    4. HBASE-5448_2.patch
      380 kB
      Gary Helmling

      Issue Links

        Activity

        stack added a comment -

        Marking closed.

        stack added a comment -

        Maybe if I'm feeling especially crazy I'll check that out over the holidays.

        I'd say wait for some demand. Drink eggnog instead!

        Thanks for input on the exception handling (and you too DD).

        Gary Helmling added a comment -

        if (controller.failedOnException()) { throw controller.getFailedOn(); }

        Since this was becoming such a common pattern, I recently added ServerRpcController.checkFailed() to do this in a single step.
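
        A minimal sketch of the two patterns side by side, for reference (the controller here is assumed to be a ServerRpcController, as in the quoted snippet above, and checkFailed() is assumed to re-throw the exception stored on the controller):

        // Two-step pattern:
        if (controller.failedOnException()) {
          throw controller.getFailedOn();
        }
        // Single-step equivalent:
        controller.checkFailed();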

        Gary Helmling added a comment -

        Stack, comments below:

        A thought. I think this pb-based way of doing dynamic cps is elegant after looking at it a while. I know 'elegant' is not the first thing that comes to mind when you have pb and rpc in the mix, but hey, can't keep it to myself.

        I'm glad someone sees it that way. It is at least consistent, if a little verbose. I think it could be more "elegant" if we did a custom PB compiler plugin and required service implementors to compile their endpoint definitions with our own script or build target. Then I think we could control the generated service method signatures. Maybe if I'm feeling especially crazy I'll check that out over the holidays. But I wouldn't consider actually shipping that unless it significantly simplified these cases and didn't require additional mass changes.

        One thing I think we have lost though going to this new mechanism is the ability to do generics: i.e. GenericProtocol over in hbase-server/src/test can't be made to work now. I believe this is so because pb requires you specify a type: https://developers.google.com/protocol-buffers/docs/proto#simple Do you agree G?

        Yes, I agree, though in a way generics don't even apply with the use of protobufs. Services could do more dynamic interpretation of messages, but it would be up to them to implement that in a way that made sense for the specific case. I don't think there's anything we need to do to support this.

        How to do exceptions in a coprocessor? I see where we set the exception on the controller if there is one, but should we then abandon further processing – return? We need to call the RpcCallback done, though, right?

        Yes, the exception should be set on the controller in order to be returned to the client. It seems to be good practice to always call RpcCallback.done(), but it's not strictly required for endpoint implementations and it should also be fine to pass a null argument in the case of an exception. Your implementation looks fine to me, assuming that "sum" is a required field in the proto message, otherwise you could skip setting the dummy value in the response on an exception.

        One additional idea would be to define a custom unchecked exception (EndpointException extends RuntimeException?) which we could watch for and use to set the exception in the controller, but either with this or the current ResponseConverter.setControllerException() we're relying on convention over a real contract, which doesn't seem great.
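
        For illustration only, a rough sketch of that idea; EndpointException and the dispatch-side catch below are hypothetical and not part of this patch:

        // Hypothetical unchecked wrapper an endpoint could throw instead of
        // setting the controller exception itself.
        public class EndpointException extends RuntimeException {
          public EndpointException(IOException cause) {
            super(cause);
          }
        }

        // The invoking host (or a shared helper) could then translate it onto the controller:
        try {
          service.callMethod(method, controller, request, callback);
        } catch (EndpointException e) {
          ResponseConverter.setControllerException(controller, (IOException) e.getCause());
        }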

        Devaraj Das added a comment -

        stack, yes, this is how I think it needs to be done. For examples, please have a look at the AggregateClient and AggregateImplementation classes. On the client side, you could do the following (to check/signal exceptions):
        AggregateResponse response = rpcCallback.get();
        if (controller.failedOnException()) {
          throw controller.getFailedOn();
        }

        Gary Helmling, please chime in if I missed anything.

        stack added a comment -

        Gary Helmling One other question G. How to do exceptions in a coprocessor? I see where we set the exception on the controller if there is one, but should we then abandon further processing – return? We need to call the RpcCallback done, though, right?

        Here is example from tail of an endpoint cp implementation:

        ...
            } catch (IOException e) {
              ResponseConverter.setControllerException(controller, e);
              // Set result to -1 to indicate error.
              sumResult = -1;
              LOG.info("Setting sum result to -1 to indicate error", e);
            } finally {
              if (scanner != null) {
                try {
                  scanner.close();
                } catch (IOException e) {
                  ResponseConverter.setControllerException(controller, e);
                  sumResult = -1;
                  LOG.info("Setting sum result to -1 to indicate error", e);
                }
              }
            }
            done.run(SumResponse.newBuilder().setSum(sumResult).build());
          }
        

        Is that how you'd do it?

        Thanks.

        stack added a comment -

        Thanks for fixing the null issue G.

        A thought. I think this pb-based way of doing dynamic cps is elegant after looking at it a while. I know 'elegant' is not the first thing that comes to mind when you have pb and rpc in the mix, but hey, can't keep it to myself.

        One thing I think we have lost though going to this new mechanism is the ability to do generics: i.e. GenericProtocol over in hbase-server/src/test can't be made to work now. I believe this is so because pb requires you specify a type: https://developers.google.com/protocol-buffers/docs/proto#simple Do you agree G?

        I think this is fine. We just need to doc it in the ref notes above I'd say and be done w/ it.

        Gary Helmling added a comment -

        Opened HBASE-7397 to fix.

        Gary Helmling added a comment -

        Stack, that seems an unintentional bug on my part. I recall that the coprocessorExec() synchronization issue was being fixed at the same time I was working on this. I might have missed the final resolution. I probably chose CSLM for "performance" but missed the change in semantics with disallowing null. With the switch to PB services, you should not get back null from an endpoint (except with an exception), but that doesn't mean the Batch.Call implementation wouldn't return null.
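
        For reference, a minimal sketch of the older null-tolerant construction (only the result map would change; everything else in coprocessorService() stays the same):

        // A TreeMap wrapped in Collections.synchronizedMap accepts null values,
        // unlike ConcurrentSkipListMap.
        final Map<byte[], R> results = Collections.synchronizedMap(
            new TreeMap<byte[], R>(Bytes.BYTES_COMPARATOR));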

        stack added a comment -

        Gary Helmling Mighty Gary, why ConcurrentSkipListMap below in HTable#coprocessorService rather than what was used previously in HTable#coprocessorExec, a TreeMap wrapped in Collections.synchronizedMap? The former does not allow null values whereas the latter does. There is a test in TestServerCustomProtocol that checks we can get back null. It fails if I try to return null because of the above change. Here is the code I refer to:

        	
          public <T extends Service, R> Map<byte[],R> coprocessorService(final Class<T> service,
              byte[] startKey, byte[] endKey, final Batch.Call<T,R> callable)
              throws ServiceException, Throwable {
            final Map<byte[],R> results = new ConcurrentSkipListMap<byte[], R>(Bytes.BYTES_COMPARATOR);
            coprocessorService(service, startKey, endKey, callable, new Batch.Callback<R>() {
              public void update(byte[] region, byte[] row, R value) {
                if (value == null) {
                  if (LOG.isDebugEnabled()) {
                    LOG.debug("Call to " + service.getName() +
                        " received NULL value from Batch.Call for region " + Bytes.toStringBinary(region));
                  }
                } else {
                  results.put(region, value);
                }
              }
            });
            return results;
          }

        Wondering if there is an explicit reason that I am not aware of (there may be, given you seem to go out of your way to not return nulls, though it seems simple enough to go back to the old way of doing nulls).

        Thanks boss.

        Hudson added a comment -

        Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #179 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/179/)
        HBASE-5448 Support dynamic coprocessor endpoints with protobuf based RPC (Revision 1387001)

        Result = FAILURE
        garyh :
        Files :

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTable.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Batch.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Exec.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/ExecResult.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/package-info.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorService.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/example/generated
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/example/generated/ExampleProtos.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/package-info.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcCallback.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CoprocessorProtocol.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CoprocessorRpcChannel.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ExecRPCInvoker.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcController.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
        • /hbase/trunk/hbase-server/src/main/protobuf/AccessControl.proto
        • /hbase/trunk/hbase-server/src/main/protobuf/Client.proto
        • /hbase/trunk/hbase-server/src/main/protobuf/Examples.proto
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/ColumnAggregationEndpoint.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/ProtobufCoprocessorService.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestRowCountEndpoint.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
        Hudson added a comment -

        Integrated in HBase-TRUNK #3345 (See https://builds.apache.org/job/HBase-TRUNK/3345/)
        HBASE-5448 Support dynamic coprocessor endpoints with protobuf based RPC (Revision 1387001)

        Result = FAILURE
        garyh :
        Files :

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTable.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Batch.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Exec.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/ExecResult.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/coprocessor/package-info.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorService.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/example/generated
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/example/generated/ExampleProtos.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/package-info.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcCallback.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CoprocessorProtocol.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CoprocessorRpcChannel.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ExecRPCInvoker.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcController.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
        • /hbase/trunk/hbase-server/src/main/protobuf/AccessControl.proto
        • /hbase/trunk/hbase-server/src/main/protobuf/Client.proto
        • /hbase/trunk/hbase-server/src/main/protobuf/Examples.proto
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/ColumnAggregationEndpoint.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/ProtobufCoprocessorService.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestRowCountEndpoint.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12545530/HBASE-5448_4.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 16 new or modified tests.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2887//console

        This message is automatically generated.

        Gary Helmling added a comment -

        Committed patch to trunk. Thanks for the reviews Ted, Stack and Andy.

        Gary Helmling added a comment -

        Attaching final patch committed to trunk. Only changes from previous are minor javadoc tweaks and JUnit rule line from Ted's review.

        Ted Yu added a comment -

        @Gary:
        Totally fine with integration.

        It would be nice to see the performance difference between your rowcounter service-based coprocessor example and the map/reduce job example.

        Gary Helmling added a comment -

        Thanks for the reviews guys.

        Stack TestReplication is timing out for me locally both on current trunk and on my branch, so I don't think it's anything specific to this change.

        Ted Yu I'll fix up your additional comments on commit if that's okay with you. I have local changes for them already.

        Andrew Purtell added a comment -

        Big thanks for adding the rowcounter service-based coprocessor example Gary. Looks good to me. I see the follow up JIRAs also, makes sense. +1 for commit.

        stack added a comment -

        Gary Helmling Does TestReplication pass for you locally, Gary? I doubt your patch is the problem.

        I took a quick look at the patch. The package doc is great. +1 on commit.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12545173/HBASE-5448_3.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 16 new or modified tests.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The patch appears to cause mvn compile goal to fail.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.replication.TestReplication

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2871//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2871//console

        This message is automatically generated.

        Gary Helmling added a comment -

        New patch updating package javadoc for org.apache.hadoop.hbase.coprocessor and org.apache.hadoop.hbase.client.coprocessor, addressing some review comments, and adding an example coprocessor Service implementation (RowCountEndpoint).

        stack added a comment -

        Gary Helmling Those tests commonly fail on trunk; I doubt it's your patch. I'd be +1 on commit, Gary. Can we address stuff like removing all Writable references in new JIRAs? (Could you address the Ted comments on commit?)

        Ted Yu added a comment -

        I left a few minor comments on:
        https://reviews.apache.org/r/7000/

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12544789/HBASE-5448_2.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 14 new or modified tests.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The patch appears to cause mvn compile goal to fail.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.replication.TestReplication
        org.apache.hadoop.hbase.master.TestSplitLogManager

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2850//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2850//console

        This message is automatically generated.

        Gary Helmling added a comment -

        Updating patch to rebase against current trunk and address the review comments so far.

        Ted Yu added a comment -

        @Gary:
        Do you mind rebasing your patch?
        Thanks

        Gary Helmling added a comment -

        ... should that be back to the client in the above or do you mean passing exceptions from client to server?

        For the current implementation, I mean passing exceptions up from coprocessor endpoint Service implementations to HRegionServer.execService(), where they can be re-thrown as a ServiceException (and sent back to the client by the RPC server). See AccessController.grant() for an example:

             try {
              grant(perm);
              response = AccessControlProtos.GrantResponse.getDefaultInstance();
            } catch (IOException ioe) {
              // pass exception back up
              ResponseConverter.setControllerException(controller, ioe);
            }
        

        The PB-generated Service sub-class only defines the async/callback based methods which do not throw ServiceException, so ServerRpcController provides an alternate (though maybe not so natural) channel for communicating the exception. But if this is useful on the server-side maybe it would be useful in client-side invocations as well?

        Same rpccontroller for dynamic cp endpoints or you mean in general (Sorry if dumb question – I don't know what the rpccontroller thing is about.. need to go read up).

        Same ServerRpcController for clients invoking dynamic CP endpoints (if that's useful), and a bit of doc around that.
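
        To make that concrete, here is a rough sketch of what such a client-side call might look like with the range-based HTable.coprocessorService(); SumService, SumRequest, SumResponse and the surrounding variable names are hypothetical placeholders, not part of this patch:

        final SumRequest request = SumRequest.newBuilder().setFamily("f").setColumn("c").build();
        Map<byte[], Long> results = table.coprocessorService(SumService.class,
            startRow, stopRow,
            new Batch.Call<SumService, Long>() {
              public Long call(SumService instance) throws IOException {
                ServerRpcController controller = new ServerRpcController();
                BlockingRpcCallback<SumResponse> callback = new BlockingRpcCallback<SumResponse>();
                // Invoke the generated async method; the blocking callback waits for done.run().
                instance.getSum(controller, request, callback);
                SumResponse response = callback.get();
                // Re-throw any exception the endpoint set on the controller.
                controller.checkFailed();
                return response.getSum();
              }
            });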

        The bit of code pasted looks reasonable (I was going to say 'natural' but that'd be going too far... I think you know what I mean... when folks see a pb service, they know they got some building to do....)

        Yes, agree it's not quite as transparent as the proxy-based approach for CoprocessorProtocol, but I don't see any easy way other than exposing the PB Messages and Service directly.

        On #1, above, that means we can't just drop Writable in the rpc for 0.96 because you are thinking of supporting old and new in 0.96? (... I should go back to the dev list and make sure we are all good w/ this... but that is my understanding).

        Yes, that's true. Deprecating would keep the current Exec/ExecResult and required handling of Writable params in place for 0.96. Since the way we expose CP endpoints is an aspect of client interface, that seemed like the fairest approach. Though if we poll the dev-list and there are no major objections, I could see making an exception to be able to phase out all Writable usage. In the very short-term, keeping both paths in place will allow switching over current CP endpoints piece-by-piece without breakage.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12544583/HBASE-5448.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 14 new or modified tests.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2846//console

        This message is automatically generated.

        stack added a comment -

        Good on you Gary (Was afraid to ask about this one.... it's gnarly).

        I debated for a while ...But it seems simpler to actually embrace that aspect of the generated code and pass through any provided RpcController where possible.

        Sounds reasonable G.

        I created an o.a.h.h.ipc.ServerRpcController implementation to handle the RegionCoprocessorHost endpoint invocations. This facilitates passing back exceptions to the RPC server.

        ... should that be back to the client in the above or do you mean passing exceptions from client to server?

        ... but it may make sense to genericize this a bit more and allow clients to make use of the same implementation.

        Same rpccontroller for dynamic cp endpoints or you mean in general (Sorry if dumb question – I don't know what the rpccontroller thing is about.. need to go read up).

        The bit of code pasted looks reasonable (I was going to say 'natural' but that'd be going too far... I think you know what I mean... when folks see a pb service, they know they got some building to do....)

        Implementors will be going directly against the pb messages. That's probably fine. If important to them, they can do the work hiding the pb and doing the transforms in layers of their own.

        On #1, above, that means we can't just drop Writable in the rpc for 0.96 because you are thinking of supporting old and new in 0.96? (0.96 is the singularity; unless someone screams otherwise, we can do stuff we wouldn't ever do over a major release – I should go back to the dev list and make sure we are all good w/ this... but that is my understanding).

        On new issues for conversion of old cp-based implementations, that sounds good.

        Let me take a look at the patch now....

        Gary Helmling added a comment -

        Here's a preliminary patch for discussion. The intent here is that all coprocessor endpoints be defined and callable as protobuf services, so that we no longer have endpoint invocations passed through as Writable blobs.

        For coprocessor implementors, this means doing the following:

        1. Define the coprocessor service and related messages in a .proto file
        2. Run protoc to generate code
        3. Write code implementing:
          • the generated protobuf Service interface
          • the new org.apache.hadoop.hbase.coprocessor.CoprocessorService interface (required for the RegionCoprocessorHost to register the exposed service)

        With this done, clients can then call the new HTable.coprocessorService() methods to perform the endpoint RPCs. This basically mirrors the current HTable.coprocessorProxy() and HTable.coprocessorExec() calls.
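
        As a rough illustration of step 3 above, a skeleton endpoint is sketched below; SumProtos/SumEndpoint are hypothetical stand-ins for whatever protoc generates from the implementor's .proto, and this class is not part of the patch:

        // Extends the protoc-generated service and implements CoprocessorService
        // so RegionCoprocessorHost can register and expose it.
        public class SumEndpoint extends SumProtos.SumService
            implements Coprocessor, CoprocessorService {

          @Override
          public Service getService() {
            return this;
          }

          @Override
          public void getSum(RpcController controller, SumProtos.SumRequest request,
              RpcCallback<SumProtos.SumResponse> done) {
            long sum = 0;
            // ... scan the region and accumulate the sum; on IOException call
            // ResponseConverter.setControllerException(controller, e) ...
            done.run(SumProtos.SumResponse.newBuilder().setSum(sum).build());
          }

          @Override
          public void start(CoprocessorEnvironment env) throws IOException {
          }

          @Override
          public void stop(CoprocessorEnvironment env) throws IOException {
          }
        }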

        I debated for a while how we could hide away details of the protobuf generated service code from clients (i.e. Service methods take an RpcController instance which isn't used in the HBase client). But it seems simpler to actually embrace that aspect of the generated code and pass through any provided RpcController where possible. As part
        of this patch, I created an o.a.h.h.ipc.ServerRpcController implementation to handle the RegionCoprocessorHost endpoint invocations. This facilitates passing back exceptions to the RPC server. But it may make sense to genericize this a bit more and allow clients to make use of the same implementation.

        For clients, invocations now look something like:

        HTable myTable = new HTable(conf, "mytable");
        CoprocessorRpcChannel channel = myTable.coprocessorService(rowkey);
        MyService.BlockingInterface service = MyService.newBlockingStub(channel);
        
        MyCallRequest request = MyCallRequest.newBuilder()
             ...
             .build();
        MyCallResponse response = service.myCall(null, request);
        

        Major changes in this patch are:

        1. CoprocessorProtocol and the associated HTable client methods (coprocessorProxy, coprocessorExec) are now deprecated. The current implementation allows both CoprocessorProxy and CoprocessorService
          implementations to be registered side-by-side (even for the same class), so we can have a full major-release cycle before dropping these.
        2. AccessController now defines and registers a PB-based Service. TestAccessController is mostly converted to using this implementation in place of AccessControllerProtocol.
        3. HRegionServer and HRegion define new execService() methods to handle the invocations

        This still needs some additional documentation and clean up, but I wanted to get it out for some feedback on the direction for the exposed client interface.

        Additional work to do:

        1. Convert other bundled CoprocessorProtocol-based implementations to PB-based services. I think it's best to handle these in follow up issues.
        2. Some extra doc on the added classes
        3. Update org.apache.hadoop.hbase.coprocessor package javadoc with new usage
        Gary Helmling added a comment -

        Grabbing this, as I've been thinking through some ideas of how CP implementors could expose their own protocols using protobufs. I'll get up a patch so we can discuss some of the trickier points.


          People

          • Assignee:
            Gary Helmling
            Reporter:
            Todd Lipcon
          • Votes:
            0
            Watchers:
            14

            Dates

            • Created:
              Updated:
              Resolved:

              Development