Description
S3FileSystem.delete(Path path, boolean recursive) may fail with a FileNotFoundException if a directory is being deleted while some of its files are concurrently deleted in the background.
This is clearly not the expected behavior of a delete method: if one of the to-be-deleted files turns out to be missing, the method should not fail but simply continue. This follows from the general contract of FileSystem.delete and should hold for its various implementations; RawLocalFileSystem (specifically FileUtil.fullyDelete) exhibits the same problem.
The fix is to silently catch and ignore FileNotFoundExceptions in delete loops. This can very easily be unit-tested, at least for RawLocalFileSystem.
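As a sketch of the local-filesystem case, the following hypothetical helper (the class and method names are mine, not the actual patch) shows the intended behavior in the spirit of FileUtil.fullyDelete. Note that with plain java.io.File the race surfaces as null listings and false return values rather than exceptions, so "tolerate a missing file" here means treating those as success:
{code:java}
import java.io.File;

/**
 * Hypothetical helper sketching the proposed behavior of
 * FileUtil.fullyDelete: a recursive delete that tolerates entries
 * disappearing underneath it. Illustrative only, not the actual patch.
 */
public final class TolerantLocalDelete {
  private TolerantLocalDelete() {}

  public static boolean fullyDelete(File dir) {
    // listFiles() returns null if dir is not a directory or has already
    // vanished; in either case there are no children left to remove.
    File[] children = dir.listFiles();
    if (children != null) {
      for (File child : children) {
        fullyDelete(child);  // keep going even if a child is already gone
      }
    }
    // delete() returns false when the entry is already gone; treat a
    // concurrent deleter winning the race as success, not failure.
    return dir.delete() || !dir.exists();
  }
}
{code}
The same idea applied at the FileSystem API level, where the race does surface as a FileNotFoundException, is sketched after the stack trace below.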
The reason this issue bothers me is that the cleanup phase of a long (Mahout) MR job intermittently fails for me, and I believe this is the root cause. The log shows:
java.io.FileNotFoundException: s3://S3-BUCKET/tmp/0008E25BF7554CA9/2521362836721872/DistributedMatrix.times.outputVector/_temporary/_attempt_201004061215_0092_r_000002_0/part-00002: No such file or directory.
at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:334)
at org.apache.hadoop.fs.s3.S3FileSystem.listStatus(S3FileSystem.java:193)
at org.apache.hadoop.fs.s3.S3FileSystem.delete(S3FileSystem.java:303)
at org.apache.hadoop.fs.s3.S3FileSystem.delete(S3FileSystem.java:312)
at org.apache.hadoop.mapred.FileOutputCommitter.cleanupJob(FileOutputCommitter.java:64)
at org.apache.hadoop.mapred.OutputCommitter.cleanupJob(OutputCommitter.java:135)
at org.apache.hadoop.mapred.Task.runJobCleanupTask(Task.java:826)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:292)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
(similar errors are displayed for ReduceTask.run)
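To tie the proposed fix to the trace: the FileNotFoundException above escapes from getFileStatus via listStatus inside the recursive delete, so each of those calls needs to be wrapped, with a missing entry treated as already deleted. The following is a hedged sketch of that pattern against the FileSystem API; TolerantFsDelete and its structure are illustrative assumptions, not the committed S3FileSystem change:
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Hypothetical helper showing an FNFE-tolerant recursive delete against
 * the FileSystem API; illustrative only, not the actual S3FileSystem fix.
 */
public final class TolerantFsDelete {
  private TolerantFsDelete() {}

  public static boolean delete(FileSystem fs, Path path) throws IOException {
    FileStatus status;
    try {
      status = fs.getFileStatus(path);
    } catch (FileNotFoundException e) {
      return false;  // path already gone: nothing to do
    }
    if (status.isDir()) {
      FileStatus[] children;
      try {
        // This is the call that throws in the trace above.
        children = fs.listStatus(path);
      } catch (FileNotFoundException e) {
        return false;  // directory vanished after getFileStatus
      }
      if (children != null) {
        for (FileStatus child : children) {
          try {
            delete(fs, child.getPath());
          } catch (FileNotFoundException e) {
            // A background deleter got here first; continue with the rest.
          }
        }
      }
    }
    try {
      return fs.delete(path, false);  // remove the now-empty entry itself
    } catch (FileNotFoundException e) {
      return false;
    }
  }
}
{code}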
Issue Links
- relates to HADOOP-11572: s3a delete() operation fails during a concurrent delete of child entries (Resolved)
- relates to HADOOP-6631: FileUtil.fullyDelete() should continue to delete other files despite failure at any level (Closed)