Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
In the current HA mechanism, FailoverProxyProvider (and RetryProxy in non-HA setups) retries a request at the RPC layer. If the retried request has already been processed at the namenode, the subsequent attempts fail for non-idempotent operations such as create, append, delete, rename, etc. This causes application failures during HA failover, network issues, etc.
This jira proposes adding a retry cache at the namenode to handle these failures. More details in the comments.
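To illustrate the idea, the sketch below shows a namenode-side cache that remembers the outcome of each non-idempotent request, keyed by the globally unique client ID plus the call ID carried in the RPC request (see HADOOP-9688 and HADOOP-9716), so a retried request can be answered from the cache instead of failing. This is a minimal, hypothetical sketch; class and method names are illustrative and do not reflect the actual HDFS implementation.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: not the real HDFS RetryCache.
public class RetryCacheSketch {

  /** Cache key: globally unique client ID plus the client-side call ID. */
  static final class CacheKey {
    private final String clientId;
    private final int callId;

    CacheKey(String clientId, int callId) {
      this.clientId = clientId;
      this.callId = callId;
    }

    @Override
    public boolean equals(Object o) {
      if (!(o instanceof CacheKey)) {
        return false;
      }
      CacheKey other = (CacheKey) o;
      return callId == other.callId && clientId.equals(other.clientId);
    }

    @Override
    public int hashCode() {
      return clientId.hashCode() * 31 + callId;
    }
  }

  /** Cached outcome of a previously processed non-idempotent request. */
  static final class CacheEntry {
    final boolean success;
    final Object payload; // e.g. the status returned by the original create()

    CacheEntry(boolean success, Object payload) {
      this.success = success;
      this.payload = payload;
    }
  }

  private final Map<CacheKey, CacheEntry> entries = new ConcurrentHashMap<>();

  /** Returns the cached entry for a retried request, or null if unseen. */
  public CacheEntry lookup(String clientId, int callId) {
    return entries.get(new CacheKey(clientId, callId));
  }

  /** Records the outcome after the request is processed the first time. */
  public void record(String clientId, int callId, boolean success, Object payload) {
    entries.put(new CacheKey(clientId, callId), new CacheEntry(success, payload));
  }
}
{code}
In this sketch, the RPC handler would call lookup() before executing a non-idempotent operation and replay the recorded outcome if an entry exists, rather than failing the retried request.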
Attachments
Issue Links
- incorporates
  - HADOOP-9688 Add globally unique Client ID to RPC requests (Closed)
  - HADOOP-9716 Move the Rpc request call ID generation to client side InvocationHandler (Closed)
  - HADOOP-9717 Add retry attempt count to the RPC requests (Closed)
  - HADOOP-9691 RPC clients can generate call ID using AtomicInteger instead of synchronizing on the Client instance. (Closed)
- is related to
  - HDFS-4849 Enable retries for create and append operations. (Resolved)
- relates to
  - HADOOP-9786 RetryInvocationHandler#isRpcInvocation should support ProtocolTranslator (Closed)
  - HDFS-5008 Make ClientProtocol#abandonBlock() idempotent (Closed)