Details

    • Type: Sub-task
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0-alpha4
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels:
      None

      Description

      You can pass a negative length into S3AFileSystem.putObjectDirect, which means "put until the end of the stream". S3Guard has been persisting this len argument as the file length; it needs to use the actual number of bytes uploaded. This is also relevant with client-side encryption, where the amount of data put can exceed the amount of data in the file or stream.

      Noted in the committer branch after I added some more assertions. I've changed it there by modifying S3AFileSystem.putObjectDirect to pull the content length passed to finishedWrite() from the PutObjectResult instead. This can be picked into the s3guard branch.
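For illustration, the failure mode reduces to a tiny sketch (the class and method here are hypothetical; the real check lives in S3AFileSystem.finishedWrite() via Guava's Preconditions.checkArgument):

```java
// Minimal sketch of the bug: a length of -1 means "PUT to end of stream"
// for the upload itself, but persisting -1 as the file length would
// corrupt the metadata store, so it must be rejected.
public final class ContentLengthGuard {

    // Modeled on the checkArgument call in finishedWrite().
    static void checkContentLength(long length) {
        if (length < 0) {
            throw new IllegalArgumentException("content length is negative");
        }
    }

    public static void main(String[] args) {
        checkContentLength(1024);   // a real byte count passes
        try {
            checkContentLength(-1); // the "to end of stream" marker fails
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```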

        Issue Links

          Activity

          stevel@apache.org Steve Loughran added a comment -

          Stack trace, which won't quite match s3guard or be reproducible there:

          Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream
          Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.154 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream
          testEncryption(org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream)  Time elapsed: 0.961 sec  <<< ERROR!
          java.io.IOException: regular upload failed: java.lang.IllegalArgumentException: content length is negative
          	at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:205)
          	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:456)
          	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:368)
          	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
          	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
          	at org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:159)
          	at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.writeThenReadFile(AbstractS3ATestBase.java:135)
          	at org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.validateEncryptionForFilesize(AbstractTestS3AEncryption.java:79)
          	at org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.testEncryption(AbstractTestS3AEncryption.java:57)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:498)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
          	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
          	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          Caused by: java.lang.IllegalArgumentException: content length is negative
          	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2252)
          	at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1354)
          	at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$3(WriteOperationHelper.java:392)
          	at org.apache.hadoop.fs.s3a.AwsCall.execute(AwsCall.java:43)
          	at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:390)
          	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:439)
          	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:432)
          	at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
          	at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
          	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
          	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
          	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
          	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          	at java.lang.Thread.run(Thread.java:745)
          
          stevel@apache.org Steve Loughran added a comment -

          Rating as minor, as the output streams don't normally pass in -1 as a length.

          stevel@apache.org Steve Loughran added a comment -

          You don't get the upload length from the PutObjectResult; the content-length it returns is the length of the response. You may get it through the progress callbacks.

          Options

          1. Don't allow -1 as a length in a PUT.
          2. If a PUT passes in a stream and a -1 length, do a GET afterwards to assess its length. This is expensive and, if overwriting an existing object, not guaranteed to be correct.
          3. Use progress callbacks. This would be a consistent path for all uploads.

          I'm going with option one. The only two places where a PUT is initiated this way are: the PUT at the end of a block write in BlockOutputStream, and the local file upload in the s3guard committer. Both codepaths know the length.
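Both call sites can compute a definite length before the PUT begins; a rough sketch of where those numbers would come from (illustrative helper names, not the actual Hadoop code):

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public final class KnownPutLengths {

    // A block write buffers its data, so the byte count is tracked directly.
    static long bufferedLength(byte[] data) {
        ByteArrayOutputStream block = new ByteArrayOutputStream();
        block.write(data, 0, data.length);
        return block.size();
    }

    // A staged local-file upload can ask the filesystem for the size.
    static long stagedFileLength(byte[] data) throws IOException {
        File staged = File.createTempFile("s3a-put", ".bin");
        try (FileOutputStream out = new FileOutputStream(staged)) {
            out.write(data);
        }
        long length = staged.length();
        staged.delete();
        return length;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(bufferedLength(new byte[4096]));   // prints 4096
        System.out.println(stagedFileLength(new byte[1024])); // prints 1024
    }
}
```

Since each caller already holds a non-negative length, rejecting -1 at the PUT entry point costs nothing on the happy path.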

          liuml07 Mingliang Liu added a comment -

          Option 1 seems OK to me. It's simple and the most reliable.

          stevel@apache.org Steve Loughran added a comment -

          I've fixed this in HADOOP-13786; picking out the bits for s3guard direct.

          stevel@apache.org Steve Loughran added a comment -

          Patch 001

          This picks in the HADOOP-13786 checks on put-object length, both at the start of the put and as a final safety check in finishedWrite(). A new test catches the exception and asserts that nothing was created; that is, the first check is sufficient. The second is there to make sure that no codepath can get a -1 into the db.
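The two-layer structure might look roughly like this (a sketch under assumed names, not the actual patch; the map stands in for the S3Guard metadata store):

```java
import java.util.HashMap;
import java.util.Map;

public final class TwoLayerLengthCheck {
    // Stand-in for the S3Guard metadata store.
    final Map<String, Long> store = new HashMap<>();

    // First check: fail fast, before any bytes are uploaded.
    void putObjectDirect(String key, long length) {
        if (length < 0) {
            throw new IllegalArgumentException("content length is negative");
        }
        // ... upload would happen here ...
        finishedWrite(key, length);
    }

    // Second check: a final safety net so no codepath records -1.
    void finishedWrite(String key, long length) {
        if (length < 0) {
            throw new IllegalArgumentException("content length is negative");
        }
        store.put(key, length);
    }
}
```

A test against this shape would assert both that the exception surfaces and that the store stays empty, i.e. the first check is what actually protects the data.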

          Testing: S3 Frankfurt and Ireland. All is well except that both test runs return 400 "bad request" trying to list/walk the key user/stevel in ITestS3AContractRootDir.testRecursiveRootListing(). I don't understand that, and I don't believe it's related.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 21s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          -1 mvninstall 16m 54s root in HADOOP-13345 failed.
          -1 compile 0m 7s hadoop-aws in HADOOP-13345 failed.
          +1 checkstyle 0m 14s HADOOP-13345 passed
          -1 mvnsite 0m 7s hadoop-aws in HADOOP-13345 failed.
          -1 findbugs 0m 7s hadoop-aws in HADOOP-13345 failed.
          -1 javadoc 0m 8s hadoop-aws in HADOOP-13345 failed.
          -1 mvninstall 0m 6s hadoop-aws in the patch failed.
          -1 compile 0m 5s hadoop-aws in the patch failed.
          -1 javac 0m 5s hadoop-aws in the patch failed.
          +1 checkstyle 0m 11s the patch passed
          -1 mvnsite 0m 6s hadoop-aws in the patch failed.
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 0m 5s hadoop-aws in the patch failed.
          -1 javadoc 0m 7s hadoop-aws in the patch failed.
          -1 unit 0m 5s hadoop-aws in the patch failed.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          19m 57s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HADOOP-14423
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12874533/HADOOP-14423-HADOOP-13345-001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux d5a42cfb6e8e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision HADOOP-13345 / 2b3c4b8
          Default Java 1.8.0_131
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/branch-mvninstall-root.txt
          compile https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/branch-compile-hadoop-tools_hadoop-aws.txt
          mvnsite https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/branch-mvnsite-hadoop-tools_hadoop-aws.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aws.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/branch-javadoc-hadoop-tools_hadoop-aws.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-aws.txt
          compile https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-aws.txt
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-aws.txt
          mvnsite https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/patch-mvnsite-hadoop-tools_hadoop-aws.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/patch-findbugs-hadoop-tools_hadoop-aws.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/patch-javadoc-hadoop-tools_hadoop-aws.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-aws.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/12627/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          Depends on the build fix before patches will compile.


            People

            • Assignee:
              stevel@apache.org Steve Loughran
              Reporter:
              stevel@apache.org Steve Loughran
            • Votes:
              0 Vote for this issue
              Watchers:
              4 Start watching this issue

              Dates

              • Created:
                Updated:

                Development