Hadoop Common / HADOOP-12267

s3a failure due to integer overflow bug in AWS SDK

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.6.0
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None

      Description

      Under high load writing to Amazon AWS S3 storage, a client can be throttled enough to encounter 24 retries in a row.
      The Amazon HTTP client code (in the aws-java-sdk jar) has a bug in its exponential-backoff retry logic that causes an integer overflow and a call to Thread.sleep() with a negative value, which makes the client bail out with an exception (see below).

      The bug has been fixed in aws-java-sdk:

      https://github.com/aws/aws-sdk-java/pull/388

      We need to pick this up for hadoop-tools/hadoop-aws.
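
      To make the failure mode concrete, here is a minimal sketch (illustrative only, not the SDK's actual code; the 500 ms scale factor and the cap are assumptions) of how an exponential backoff computed in 32-bit int arithmetic wraps negative:

      // Illustrative sketch, NOT the aws-java-sdk source: an exponential
      // backoff computed as an int wraps around at high retry counts.
      public class BackoffOverflowDemo {
          public static void main(String[] args) {
              int scaleFactor = 500; // hypothetical base delay in milliseconds
              int retries = 24;      // the retry count reported in this issue

              // (1 << 24) * 500 = 8,388,608,000, which exceeds
              // Integer.MAX_VALUE (2,147,483,647) and wraps negative.
              int brokenDelay = (1 << retries) * scaleFactor;
              System.out.println(brokenDelay); // prints -201326592

              // Thread.sleep(brokenDelay) would throw
              // java.lang.IllegalArgumentException: timeout value is negative

              // The fix, in spirit: compute in long and cap the delay.
              long safeDelay = Math.min((1L << retries) * scaleFactor, 20_000L);
              System.out.println(safeDelay); // prints 20000, never negative
          }
      }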

      Error: java.io.IOException: File copy failed: hdfs://path-redacted --> s3a://path-redacted
      at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:284)
      at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:252)
      at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
      at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
      at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
      at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
      at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
      at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
      Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://path-redacted to s3a://path-redacted
      at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
      at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:280)
      ... 10 more
      Caused by: com.amazonaws.AmazonClientException: Unable to complete transfer: timeout value is negative
      at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:300)
      at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:284)
      at com.amazonaws.services.s3.transfer.internal.CopyImpl.waitForCopyResult(CopyImpl.java:67)
      at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFile(S3AFileSystem.java:943)
      at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:357)
      at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.promoteTmpToTarget(RetriableFileCopyCommand.java:220)
      at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:137)
      at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:100)
      at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
      ... 11 more
      Caused by: java.lang.IllegalArgumentException: timeout value is negative
      at java.lang.Thread.sleep(Native Method)
      at com.amazonaws.http.AmazonHttpClient.pauseBeforeNextRetry(AmazonHttpClient.java:864)
      at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:353)
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
      at com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
      at com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
      at com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
      at com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
      at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
      at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
      at java.util.concurrent.FutureTask.run(FutureTask.java:262)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)

          Activity

          Lei (Eddy) Xu added a comment -

          Hi, Aaron Fabbri. As we discussed offline, we will close this as a duplicate of HADOOP-12269. Let's bump the aws-sdk version in both trunk and branch-2 in HADOOP-12269.

          Thanks again for this effort.

          Lei (Eddy) Xu added a comment -

          Hi, Aaron Fabbri. The patch itself looks good to me. Would you coordinate with Thomas Demoor and Steve Loughran regarding the version of aws-sdk to pull in?

          Thanks a lot for the efforts.

          Aaron Fabbri added a comment -

          Submitting a single patch for branch-2 instead, Lei (Eddy) Xu.

          Aaron Fabbri added a comment -

          Attached a v3 patch that leaves DEFAULT_MIN_MULTIPART_THRESHOLD at the same 2^31-1 value.

          Aaron Fabbri added a comment -

          Tested these two v2 patches with S3, ensuring behavior is the same around the 2GB-1 boundary for fs.s3a.multipart.threshold.

          Thomas Demoor has already started on patches for trunk, which will use the latest-and-greatest aws-java-sdk.

          I think we should move forward with these patches for the 2.6.x and 2.7.x branches; they fix bugs for existing customers who can't upgrade to trunk.
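
          For reference, a hedged sketch of pinning that boundary via the standard Hadoop Configuration API (the bucket name is hypothetical, and whether s3a reads the property as int or long depends on the patch discussed here):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;

          public class MultipartThresholdExample {
              public static void main(String[] args) throws Exception {
                  Configuration conf = new Configuration();
                  // Set the s3a multipart threshold at the 2^31-1 byte
                  // boundary tested above; larger files use multipart transfers.
                  conf.setLong("fs.s3a.multipart.threshold", Integer.MAX_VALUE);
                  FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
                  System.out.println("Got " + fs.getUri() + " with threshold 2^31-1");
              }
          }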

          Aaron Fabbri added a comment -

          v2 patch for branch-2.6

          Aaron Fabbri added a comment -

          v2 patch for branch-2.7

          Thomas Demoor added a comment -

          I have isolated the aws-sdk bump in HADOOP-12269.

          Thomas Demoor added a comment -

          Hi Aaron,

          In HADOOP-11684 I have bumped to 1.9.x (we have been testing this for a month now and all is well). Note that other bugs fixed in the aws-sdk (the multipart threshold changing from int to long) require some code changes in s3a; a sketch of that change follows this comment.

          You will see in the comments that Steve Loughran requested pulling the aws-sdk upgrade out into a separate patch. I am doing that today, and will link to the new issue then.

          Another major benefit of 1.9+ is that S3 is a separate library, so we no longer need to pull in the entire SDK.
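
          For context, a hedged sketch of that int -> long widening, assuming the SDK's TransferManagerConfiguration setters (exact signatures vary across SDK versions, so treat this as illustrative):

          import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

          public class ThresholdWideningExample {
              public static void main(String[] args) {
                  TransferManagerConfiguration cfg = new TransferManagerConfiguration();
                  // In 1.7.x-era SDKs this threshold was an int, capping it at
                  // 2^31-1 bytes; 1.9+ widened it to long, which is why s3a's
                  // call sites needed code changes when the SDK was bumped.
                  cfg.setMultipartUploadThreshold(5L * 1024 * 1024 * 1024); // 5 GB
                  System.out.println(cfg.getMultipartUploadThreshold());
              }
          }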

          Aaron Fabbri added a comment -

          The bug fix was backported to AWS SDK 1.7.14. Officially, only the last two release lines are supported by Amazon; currently these are 1.10.x and 1.9.x.

          I suggest the 1.7.14 SDK jar for 2.6.x and 2.7.x, and then moving to the latest-and-greatest 1.10.x for trunk. Adding patches.

          I tested the patches with some basic hdfs fs s3a:// commands.


            People

            • Assignee: Aaron Fabbri
            • Reporter: Aaron Fabbri
            • Votes: 0
            • Watchers: 6
