Hadoop Common / HADOOP-11487

FileNotFound on distcp to s3n/s3a due to creation inconsistency


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.7.2
    • Fix Version/s: 2.8.0
    • Component/s: fs, fs/s3
    • Labels: None

    Description

      I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm getting the following exception:

      2015-01-16 20:53:18,187 ERROR [main] org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz
      java.io.FileNotFoundException: No such file or directory 's3n://s3-bucket/file.gz'
      	at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
      	at org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
      	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
      	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
      	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
      	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
      	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
      	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:422)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
      	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
      2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.FileNotFoundException: No such file or directory 's3n://s3-bucket/file.gz'
      	at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
      	at org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
      	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
      	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
      	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
      	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
      	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
      	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:422)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
      	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
      

      However, when I try hadoop fs -ls s3n://s3-bucket/file.gz the file is there, so the job failure is probably caused by Amazon S3's eventual consistency: the object written by the copy is not yet visible to the metadata lookup that immediately follows it.

      In my opinion, to fix this problem NativeS3FileSystem.getFileStatus should honor the fs.s3.maxRetries property and retry the lookup before giving up, in order to avoid failures like this.
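
      To illustrate the suggestion, here is a minimal caller-side sketch (not the actual NativeS3FileSystem change) of retrying the status lookup until a freshly written object becomes visible. It assumes fs.s3.maxRetries and fs.s3.sleepTimeSeconds are the properties to honor; the helper name and the fallback defaults are illustrative only.

      import java.io.FileNotFoundException;
      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      // Sketch only: a caller-side retry wrapper, not the proposed NativeS3FileSystem patch.
      public class EventuallyConsistentStatus {

        /**
         * Look up a file's status, retrying on FileNotFoundException because a
         * freshly written S3 object may not be visible to metadata reads yet.
         */
        public static FileStatus getFileStatusWithRetries(FileSystem fs, Path path)
            throws IOException, InterruptedException {
          Configuration conf = fs.getConf();
          // Fallback defaults here are illustrative; the deployed values come from the cluster configuration.
          int maxRetries = conf.getInt("fs.s3.maxRetries", 4);
          long sleepSeconds = conf.getLong("fs.s3.sleepTimeSeconds", 10);

          FileNotFoundException last = null;
          for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
              return fs.getFileStatus(path);
            } catch (FileNotFoundException e) {
              last = e;                              // object not visible yet
              if (attempt < maxRetries) {
                Thread.sleep(sleepSeconds * 1000L);  // back off before retrying
              }
            }
          }
          throw last;                                // still missing after all retries
        }

        public static void main(String[] args) throws Exception {
          Path target = new Path(args[0]);           // e.g. s3n://s3-bucket/file.gz
          FileSystem fs = target.getFileSystem(new Configuration());
          System.out.println(getFileStatusWithRetries(fs, target));
        }
      }

      Something equivalent inside NativeS3FileSystem.getFileStatus (or in DistCpUtils.preserve) would let the distcp mapper ride out the consistency window instead of failing the task.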

            People

              Assignee: John Zhuge (jzhuge)
              Reporter: Paulo Motta (pauloricardomg)
