- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.6.0
- Fix Version/s: 2.9.0, 3.0.0-alpha4, 2.8.6
- Component/s: fs/s3
- Labels: None
- Target Version/s:
Reviewing the code, s3a has the problem raised in HADOOP-6688: deletion of a child entry during a recursive directory delete is propagated as an exception, rather than being ignored as a detail which idempotent operations should tolerate. The exception should be caught and, if it is a file-not-found problem, logged rather than propagated. A sketch of this handling follows.
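Below is a minimal sketch of the proposed handling, assuming a per-entry helper deleteChild() as a hypothetical stand-in for the actual S3 object delete; it illustrates the idea and is not the S3AFileSystem patch itself.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RecursiveDeleteSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(RecursiveDeleteSketch.class);

  /**
   * Delete every child entry; a child that has vanished mid-operation
   * is logged and skipped rather than failing the whole delete, since
   * the end state the caller asked for (entry absent) is already met.
   */
  void deleteChildren(Iterable<Path> children) throws IOException {
    for (Path child : children) {
      try {
        deleteChild(child);
      } catch (FileNotFoundException e) {
        // Already deleted by another process: a detail an idempotent
        // delete() should ignore, so log at debug and move on.
        LOG.debug("{} not found during recursive delete; ignoring", child, e);
      }
    }
  }

  /** Hypothetical per-entry delete of a single S3 key. */
  void deleteChild(Path path) throws IOException {
    // ... issue the DELETE for the object backing this path ...
  }
}
{code}

Swallowing rather than rethrowing here keeps delete() idempotent: concurrent deletion of a child means the work is already done, not that the operation failed.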
- is duplicated by
  - HADOOP-14101 MultiObjectDeleteException thrown when writing directly to s3a (Resolved)
  - HADOOP-14239 S3A Retry Multiple S3 Key Deletion (Resolved)
- is related to
  - HADOOP-14303 Review retry logic on all S3 SDK calls, implement where needed (Resolved)
  - HADOOP-6688 FileSystem.delete(...) implementations should not throw FileNotFoundException (Resolved)