Sanjay> [RpcEngine] can help us provide 0.20 compatibility
I'm confused. If you switch the RPC headers to protobuf, how can you provide 0.20 compatibility? If the protobuf headers are confined to ProtobufRpcEngine then I have no issues. I thought the proposal here was to change the RPC format so that all RpcEngine implementations use protobuf headers. That would break 0.20 compatibility, no?
Sanjay> If another project wants to use Hadoop RPC with a different serialization (say Avro or Thrift), then it can.
If we don't have multiple implementations of an extension API that are regularly used, then the chances that it's actually possible to create other implementations that work are small. The reason we removed Avro was that implementing RpcEngine alone was no longer sufficient: there were assumptions elsewhere in the code that protobuf was used. RpcEngine is currently a broken abstraction, providing a false claim of pluggability.
Sanjay> Doug can add his k-v pair as a parallel structure if he wishes.
I have no desire to add a k-v parallel structure. I'm just trying to help keep the RPC architecture simple and sane.
Suresh> it seems to me that all are optional except EOH
The protobuf documentation says, "Some engineers at Google have come to the conclusion that using required does more harm than good; they prefer to use only optional and repeated. However, this view is not universal." So (arguably) all or most fields should be optional.
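For illustration, a header message declared with only optional fields might look like the following. This is a hypothetical sketch, not the actual Hadoop header: the message name, field names, and default value are invented.

```proto
// Hypothetical sketch only -- not the actual Hadoop RPC header.
// With every field optional (proto2), old and new peers can tolerate
// each other's headers; a "required" field that is absent would instead
// be a hard parse error, which is why the protobuf docs discourage it.
message RpcRequestHeaderProto {
  optional uint32 version     = 1 [default = 9];  // invented default
  optional string method_name = 2;
  optional uint64 call_id     = 3;
}
```

Note that making a field optional only moves the problem: a reader must still decide what an absent field means, which is exactly the kind of behavior a specification would need to pin down.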
Suresh> Documenting header - Hadoop documentation has been poor and incomplete.
Indeed. If Hadoop RPC is to be interoperable then it needs a language-independent specification and multiple implementations. That specification might define its headers either as protobuf messages or as RFC-like text; either would work fine, and the default values for optional header fields could be given in either. But what's also required is a description of how to handle the various values of these header fields, including the defaults. Until then, the implementation is the specification.