Description
I'm trying to copy a large number of files from HDFS to S3 via distcp, and I'm getting the following exception:
2015-01-16 20:53:18,187 ERROR [main] org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz
java.io.FileNotFoundException: No such file or directory 's3n://s3-bucket/file.gz'
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
    at org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.FileNotFoundException: No such file or directory 's3n://s3-bucket/file.gz'
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
    at org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
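For context, the copy was kicked off with a distcp invocation of roughly this shape (paths abbreviated to match the trace; the DistCpUtils.preserve frame implies attribute preservation, i.e. the -p option, was in effect):

    hadoop distcp -p hdfs://10.165.35.216/hdfsFolder s3n://s3-bucket/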
However, when I try hadoop fs -ls s3n://s3-bucket/file.gz, the file is there. The job failure is therefore probably due to Amazon S3's eventual consistency: the object was written successfully, but the getFileStatus call issued immediately afterwards could not see it yet.
In my opinion, the fix is for NativeS3FileSystem.getFileStatus to honor the fs.s3.maxRetries property and retry the lookup, so that a transient visibility lag like this does not fail the whole job.
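A rough sketch of the retry loop I have in mind, wrapping the status probe and reusing the existing fs.s3.maxRetries / fs.s3.sleepTimeSeconds settings (class and method names below are illustrative only, not an actual patch):

import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative helper, not real Hadoop code: retry getFileStatus() a few
// times before giving up, to ride out S3's create-then-read visibility lag.
public final class ConsistentGetFileStatus {

  public static FileStatus getWithRetries(FileSystem fs, Path path)
      throws IOException {
    Configuration conf = fs.getConf();
    // Reuse the retry settings S3N already applies to transient I/O errors
    // (4 retries / 10 seconds are the stock core-default.xml values).
    int maxRetries = Math.max(0, conf.getInt("fs.s3.maxRetries", 4));
    long sleepMillis = conf.getLong("fs.s3.sleepTimeSeconds", 10L) * 1000L;

    for (int attempt = 0; ; attempt++) {
      try {
        return fs.getFileStatus(path);
      } catch (FileNotFoundException e) {
        if (attempt >= maxRetries) {
          throw e; // out of retries: the object never became visible
        }
        try {
          // The object may simply not be visible yet; back off and retry.
          Thread.sleep(sleepMillis);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("Interrupted while waiting for " + path, ie);
        }
      }
    }
  }

  private ConsistentGetFileStatus() {}
}

DistCp's preserve step could then go through a helper like this instead of calling getFileStatus directly, so a copy that actually succeeded is not failed by a lookup that raced the write.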
Issue Links
- depends upon:
  - HADOOP-13950 S3A create(path, overwrite=true) need only check for path being a dir, not a file (Resolved)
  - HADOOP-9565 Add a Blobstore interface to add to blobstore FileSystems (Patch Available)
- is contained by:
  - HADOOP-13145 In DistCp, prevent unnecessary getFileStatus call when not preserving metadata. (Resolved)
- relates to:
  - HADOOP-13345 S3Guard: Improved Consistency for S3A (Resolved)
  - HADOOP-13145 In DistCp, prevent unnecessary getFileStatus call when not preserving metadata. (Resolved)