Description
Following the S3AFileSystem integration patch in HADOOP-13651, we need to add retry logic.
In HADOOP-13651, I added TODO comments in most of the places where retry loops are needed, including:
- open(path). If the MetadataStore reflects a recent create or move of the file path, but we fail to read it from S3, retry. A rough sketch of this kind of retry loop follows the list below.
- delete(path). If deleteObject() on S3 fails, but the MetadataStore shows the file exists, retry.
- rename(src, dest). If the source path is not yet visible in S3, retry.
- listFiles(). Skip for now. Not currently implemented in S3Guard; I will create a separate JIRA for this, as it will likely require interface changes (e.g. prefix or subtree scan).
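
As an illustration only, the retry loops above could be factored into a small helper along these lines. This is a minimal sketch, not the actual S3A implementation; the class name RetryingInvoker and its parameters are placeholders, and the real patch would integrate with the existing S3A retry/invoker machinery rather than use a fixed sleep.
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.concurrent.Callable;

/**
 * Sketch of a bounded retry loop for operations where S3 may lag behind
 * the MetadataStore (e.g. open() on a path the store says exists but S3
 * reports as missing). Names and structure are illustrative only.
 */
public class RetryingInvoker {

  private final int maxRetries;           // retries after the first attempt
  private final long retryIntervalMillis; // simple fixed pause between attempts

  public RetryingInvoker(int maxRetries, long retryIntervalMillis) {
    this.maxRetries = maxRetries;
    this.retryIntervalMillis = retryIntervalMillis;
  }

  /**
   * Run the operation, retrying while the failure looks like
   * eventual-consistency lag rather than a real error.
   */
  public <T> T retryOnInconsistency(String description, Callable<T> operation)
      throws IOException {
    IOException lastFailure = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return operation.call();
      } catch (FileNotFoundException e) {
        // The MetadataStore said the path should exist; treat the miss as
        // possible S3 lag and retry after a pause.
        lastFailure = e;
        try {
          Thread.sleep(retryIntervalMillis);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("Interrupted while retrying " + description, ie);
        }
      } catch (IOException e) {
        throw e;  // other I/O failures are not retried here
      } catch (Exception e) {
        throw new IOException(description + " failed", e);
      }
    }
    // Bounded: surface the last failure rather than loop forever.
    throw lastFailure;
  }
}
{code}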
We may miss some cases initially and we should do failure injection testing to make sure we're covered. Failure injection tests can be a separate JIRA to make this easier to review.
We also need basic configuration parameters around retry policy. There should be a way to specify a maximum retry duration, as some applications would prefer to receive an error eventually rather than wait indefinitely. We should also keep statistics when inconsistency is detected and we enter a retry loop. A configuration sketch follows below.
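
To make the policy concrete, something like the following could wire the retry bounds to the filesystem configuration (reusing the RetryingInvoker sketch above). The property names and defaults here are hypothetical placeholders, not committed key names.
{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Sketch of driving the consistency-retry policy from configuration.
 * Keys and defaults are illustrative; the final names would be settled
 * during the patch/review.
 */
public final class S3GuardRetryPolicyExample {

  // Hypothetical configuration keys.
  static final String RETRY_LIMIT_KEY =
      "fs.s3a.s3guard.consistency.retry.limit";
  static final String RETRY_INTERVAL_KEY =
      "fs.s3a.s3guard.consistency.retry.interval.millis";

  static final int DEFAULT_RETRY_LIMIT = 7;
  static final long DEFAULT_RETRY_INTERVAL_MILLIS = 500;

  private S3GuardRetryPolicyExample() {
  }

  /** Build the retry helper from the filesystem configuration. */
  static RetryingInvoker createInvoker(Configuration conf) {
    int limit = conf.getInt(RETRY_LIMIT_KEY, DEFAULT_RETRY_LIMIT);
    long interval = conf.getLong(RETRY_INTERVAL_KEY,
        DEFAULT_RETRY_INTERVAL_MILLIS);
    // Total wait is bounded by roughly limit * interval, so applications
    // that prefer a prompt error can lower either value.
    return new RetryingInvoker(limit, interval);
  }
}
{code}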
Attachments
Issue Links
- blocks
  - HADOOP-14576 s3guard DynamoDB resource not found: tables not ACTIVE state after initial connection (Resolved)
- contains
  - HADOOP-15216 S3AInputStream to handle reconnect on read() failure better (Resolved)
- depends upon
  - HADOOP-13651 S3Guard: S3AFileSystem Integration with MetadataStore (Resolved)
  - HADOOP-13786 Add S3A committers for zero-rename commits to S3 endpoints (Resolved)
- incorporates
  - HADOOP-15035 S3Guard to perform retry and translation of exceptions (Resolved)
- is duplicated by
  - HADOOP-14012 Handled dynamo exceptions in translateException (Resolved)
  - HADOOP-14810 S3Guard: handle provisioning failure through backoff & retry (& metrics) (Resolved)
- is related to
  - HADOOP-15426 Make S3guard client resilient to DDB throttle events and network failures (Resolved)