Details

- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.6.0
- Fix Version/s: None
Description
Reviewing the code, s3a has the problem raised in HADOOP-6688: deletion of a child entry during a recursive directory delete is propagated as an exception, rather than being ignored as a detail that idempotent operations should simply tolerate.

The exception should be caught and, if it is a file-not-found problem, logged rather than propagated.
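A minimal Java sketch of the catch-and-log pattern the description asks for. The `Store` interface and `deleteQuietly` helper are hypothetical stand-ins, not the actual `S3AFileSystem` code; the point is only that a `FileNotFoundException` raised while deleting a child entry is swallowed and logged, because "already gone" is the desired end state of an idempotent delete.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class IdempotentDelete {

    /** Hypothetical stand-in for the object-store client. */
    interface Store {
        void deleteKey(String key) throws IOException;
    }

    /**
     * Delete one child entry; treat "not found" as success.
     * Returns true if the store reported a deletion, false if the
     * entry had already vanished.
     */
    static boolean deleteQuietly(Store store, String key) throws IOException {
        try {
            store.deleteKey(key);
            return true;
        } catch (FileNotFoundException e) {
            // The entry disappeared underneath us (e.g. a concurrent
            // delete): log and continue rather than propagate, since the
            // outcome the caller wanted has already been reached.
            System.out.println("Ignoring missing entry during delete: " + key);
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // A store whose key is already gone: no exception escapes.
        Store gone = k -> { throw new FileNotFoundException(k); };
        System.out.println(deleteQuietly(gone, "s3a://bucket/dir/child"));
    }
}
```

Other `IOException`s still propagate unchanged; only the file-not-found case is downgraded to a log line, mirroring the contract argued for in HADOOP-6688.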
Attachments
Issue Links
- is duplicated by
  - HADOOP-14101 MultiObjectDeleteException thrown when writing directly to s3a (Resolved)
  - HADOOP-14239 S3A Retry Multiple S3 Key Deletion (Resolved)
- is related to
  - HADOOP-14303 Review retry logic on all S3 SDK calls, implement where needed (Resolved)
  - HADOOP-6688 FileSystem.delete(...) implementations should not throw FileNotFoundException (Resolved)