We could create a Thrift method that takes the name of the class, the method, and an array of params, and then calls coprocessorExec.
This sounds like a reasonable short-term thing to do.
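To make the idea concrete, here is a minimal sketch of what the server side of such a generic call could look like: resolve the class and method by name and invoke reflectively. None of these names come from HBase or its Thrift IDL; `GenericEndpointDispatcher` and `RowCountEndpoint` are purely illustrative.

```java
import java.lang.reflect.Method;
import java.util.Arrays;

// Hypothetical server-side dispatcher for a generic
// exec(className, methodName, params) Thrift method.
public class GenericEndpointDispatcher {

    // Stand-in for an Endpoint implementation the dispatcher targets.
    public static class RowCountEndpoint {
        public long rowCount(String regionName) {
            // A real endpoint would scan the region; fake it here.
            return regionName.length();
        }
    }

    // The shape of the proposed generic call, dispatched via reflection.
    public static Object exec(String className, String methodName,
                              Object... params) {
        try {
            Class<?> clazz = Class.forName(className);
            Object target = clazz.getDeclaredConstructor().newInstance();
            Class<?>[] types = Arrays.stream(params)
                    .map(Object::getClass)
                    .toArray(Class<?>[]::new);
            Method m = clazz.getMethod(methodName, types);
            return m.invoke(target, params);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(exec("GenericEndpointDispatcher$RowCountEndpoint",
                "rowCount", "test-region"));  // prints 11
    }
}
```

A real handler would of course dispatch through coprocessorExec rather than invoking locally; the point is only that the (class, method, params) triple is enough to route the call.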
For now, with the dynamic behaviors of the current HRPC-based stack, we can mostly get away with using the same Java client tools for flexible remote method invocation of Endpoints as with the core interfaces. In the future, the fact that every Endpoint is really its own little protocol may be more exposed. In that world, the interface passes a blob. Such blobs could recursively contain protobuf (or Thrift) encoding.
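The blob-passing idea above can be sketched with a trivial envelope: the outer call names the Endpoint and carries an opaque, length-prefixed payload, which is itself an encoded message (protobuf, Thrift, or anything else) that only the Endpoint knows how to decode. This is an assumed wire shape for illustration, not any actual HBase format.

```java
import java.io.*;

// Sketch of "the interface passes a blob": an envelope of
// (endpoint name, opaque payload). The payload could itself be a
// protobuf- or Thrift-encoded message.
public class BlobEnvelope {

    static byte[] wrap(String endpoint, byte[] innerBlob) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeUTF(endpoint);         // which Endpoint "protocol"
            out.writeInt(innerBlob.length); // length-prefixed opaque payload
            out.write(innerBlob);           // recursively encoded message
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The server peels the envelope and hands the inner blob to the
    // named Endpoint, which alone knows its encoding.
    static byte[] unwrap(byte[] envelope) {
        try {
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(envelope));
            String endpoint = in.readUTF();
            byte[] inner = new byte[in.readInt()];
            in.readFully(inner);
            System.out.println("dispatch to " + endpoint);
            return inner;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] env = wrap("RowCountEndpoint", "serialized-request".getBytes());
        System.out.println(new String(unwrap(env)));
    }
}
```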
If going forward we will support both Thrift and protobuf ("new RPC") clients, then maybe we can expect server-side code to translate from Thrift and protobuf message representations into some common representation, POJO or whatever. In other words, rehydrate from the message representation into real classes (via reflection?). At least for Java, protobuf's documentation recommends that the objects built by the protobuf unmarshaller not be used directly as application classes. I think Thrift has the same practice. So on the server side that might not be so bad.
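A minimal sketch of that rehydration step, with a plain `Map` standing in for the decoded Thrift/protobuf message: copy named fields into a real application class via reflection. `Rehydrator` and `ScanRequest` are hypothetical names, not anything in HBase.

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Rehydrate a generic message representation into an application POJO.
public class Rehydrator {

    // The application-level class we want to work with server-side.
    public static class ScanRequest {
        public String table;
        public int limit;
        public String toString() { return table + ":" + limit; }
    }

    // Copy each message field into the matching public field of the POJO.
    static <T> T rehydrate(Map<String, Object> message, Class<T> type) {
        try {
            T pojo = type.getDeclaredConstructor().newInstance();
            for (Map.Entry<String, Object> e : message.entrySet()) {
                Field f = type.getField(e.getKey());
                f.set(pojo, e.getValue());
            }
            return pojo;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> msg = new HashMap<>();
        msg.put("table", "users");
        msg.put("limit", 10);
        System.out.println(rehydrate(msg, ScanRequest.class));  // prints users:10
    }
}
```

In practice the translation layer would dispatch on field types and handle nesting, but the reflective shape is the same for either wire format.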
On the client side, given the static nature of Thrift and protobuf message schemas (compiled from IDL), we can't dynamically create messages, so there's nothing to hide behind, for example a remote invocation proxy or some message builder. It could be different if we used Avro or some other option that can create message schemas at runtime and use those dynamically generated schemas server-side.
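To illustrate exactly where the static schemas get in the way: Java can build the invocation proxy itself dynamically, but the handler is the point where a generic client would need to construct a wire message for an arbitrary method at runtime, which IDL-compiled message classes don't allow. The interface and names below are hypothetical; the handler just prints instead of marshalling.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class EndpointProxyDemo {

    // Hypothetical client-side protocol interface for one Endpoint.
    interface RowCountProtocol {
        long rowCount(String regionName);
    }

    // The proxy is easy; the handler is where compiled Thrift/protobuf
    // leaves no runtime builder to turn method.getName() and args into
    // a message. A dynamic-schema system like Avro could build one here.
    static RowCountProtocol makeClient() {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("would marshal call: " + method.getName());
            return 0L;  // placeholder response instead of a real RPC
        };
        return (RowCountProtocol) Proxy.newProxyInstance(
                EndpointProxyDemo.class.getClassLoader(),
                new Class<?>[] { RowCountProtocol.class },
                handler);
    }

    public static void main(String[] args) {
        System.out.println(makeClient().rowCount("region-1"));  // prints 0
    }
}
```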