Hadoop Common / HADOOP-11687

Ignore x-* and response headers when copying an Amazon S3 object

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.7.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: fs/s3
    • Labels:
      None

      Description

      The EMC ViPR/ECS object storage platform uses proprietary headers starting with x-emc-* (just as Amazon does with x-amz-*).
      Headers starting with x-emc-* should be included in the signature computation, but the Amazon S3 Java SDK does not do this (the EMC S3 SDK does).

      When s3a copies an object it copies all of its headers, so when the object carries x-emc-* headers the copy produces a signature mismatch.

      Removing the x-emc-* headers from the copy would make s3a compatible with the EMC ViPR/ECS object storage platform.

      More generally, removing every x-* header that is not an x-amz-* header from the copy would make s3a compatible with any object storage platform that uses proprietary headers.
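
      As an illustration only (not the committed patch, which instead rebuilds the metadata from a fixed list of known attributes), a minimal sketch of the filtering idea described above, assuming the AWS SDK 1.x ObjectMetadata API; the class and method names here are hypothetical:

      import java.util.Map;
      import com.amazonaws.services.s3.model.ObjectMetadata;

      public final class HeaderFilterSketch {

        private HeaderFilterSketch() {
        }

        // Build the metadata for a copy request, keeping only headers that any
        // S3-compatible store should accept: drop every "x-*" header that is
        // not an "x-amz-*" header (e.g. "x-emc-*").
        static ObjectMetadata filteredCopy(ObjectMetadata source) {
          ObjectMetadata dest = new ObjectMetadata();
          for (Map.Entry<String, Object> e : source.getRawMetadata().entrySet()) {
            String name = e.getKey();
            boolean proprietary = name.regionMatches(true, 0, "x-", 0, 2)
                && !name.regionMatches(true, 0, "x-amz-", 0, 6);
            if (!proprietary) {
              dest.setHeader(name, e.getValue());
            }
          }
          // User metadata ("x-amz-meta-*") is always safe to carry over.
          for (Map.Entry<String, String> e : source.getUserMetadata().entrySet()) {
            dest.addUserMetadata(e.getKey(), e.getValue());
          }
          return dest;
        }
      }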

      1. HADOOP-11687.001.patch
        2 kB
        Aaron Peterson
      2. HADOOP-11687.002.patch
        3 kB
        Harsh J
      3. HADOOP-11687.003.patch
        4 kB
        Harsh J
      4. HADOOP-11687.004.patch
        5 kB
        Harsh J

        Issue Links

          Activity

          stevel@apache.org Steve Loughran added a comment -

          ...you were in a position to +1 it yourself; I just thought I'd stick it in to be consistent, since we just didn't have an official +1 on the JIRA.

          qwertymaniac Harsh J added a comment -

          Oops, sorry Steve, I misunderstood the LGTM... Thanks for backporting, wasn't sure what the protocol was for adding to it.

          stevel@apache.org Steve Loughran added a comment -

          I didn't explicitly put a +1 here, so for the record:

          +1

          Also, I'm cherry-picking this to 2.8 so people can get the fix fast.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9541 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9541/)
          HADOOP-11687. Ignore x-* and response headers when copying an Amazon S3 (harsh: rev 256c82fe2981748cd0befc5490d8118d139908f9)

          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
          • hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
          qwertymaniac Harsh J added a comment -

          Pushed to branch-2 and trunk. Resolving.

          qwertymaniac Harsh J added a comment -

          Ran the S3A tests a few more times just in case; they still pass just fine. Will commit this shortly.

          qwertymaniac Harsh J added a comment -

          I also did a manual "Connection: close" header injection test prior to the clone point; it reproduced the issue of HADOOP-12970 without the fix, but had no impact once the fix was added.
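
          For reference, a rough sketch of that kind of manual injection, assuming the SDK keeps its raw headers in a private ObjectMetadata field named "metadata" (the field name is an assumption, not a documented API):

          import java.lang.reflect.Field;
          import java.util.Map;
          import com.amazonaws.services.s3.model.ObjectMetadata;

          public final class InjectHeaderSketch {

            private InjectHeaderSketch() {
            }

            // Force an arbitrary raw header into the metadata object, e.g.
            // "Connection: close", to reproduce HADOOP-12970 before the clone point.
            @SuppressWarnings("unchecked")
            static void injectHeader(ObjectMetadata om, String name, Object value)
                throws ReflectiveOperationException {
              Field f = ObjectMetadata.class.getDeclaredField("metadata");
              f.setAccessible(true);
              ((Map<String, Object>) f.get(om)).put(name, value);
            }

            public static void main(String[] args) throws Exception {
              ObjectMetadata om = new ObjectMetadata();
              injectHeader(om, "Connection", "close");
              System.out.println(om.getRawMetadata()); // expect Connection=close
            }
          }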

          qwertymaniac Harsh J added a comment -
          • Fixes the checkstyle warning
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 11s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 6m 49s trunk passed
          +1 compile 0m 13s trunk passed with JDK v1.8.0_77
          +1 compile 0m 13s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 13s trunk passed
          +1 mvnsite 0m 19s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 0m 29s trunk passed
          +1 javadoc 0m 12s trunk passed with JDK v1.8.0_77
          +1 javadoc 0m 14s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 13s the patch passed
          +1 compile 0m 9s the patch passed with JDK v1.8.0_77
          +1 javac 0m 9s the patch passed
          +1 compile 0m 11s the patch passed with JDK v1.7.0_95
          +1 javac 0m 11s the patch passed
          -1 checkstyle 0m 12s hadoop-tools/hadoop-aws: patch generated 1 new + 38 unchanged - 0 fixed = 39 total (was 38)
          +1 mvnsite 0m 15s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 0m 37s the patch passed
          +1 javadoc 0m 10s the patch passed with JDK v1.8.0_77
          +1 javadoc 0m 12s the patch passed with JDK v1.7.0_95
          +1 unit 0m 10s hadoop-aws in the patch passed with JDK v1.8.0_77.
          +1 unit 0m 12s hadoop-aws in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 17s Patch does not generate ASF License warnings.
          12m 55s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12796255/HADOOP-11687.003.patch
          JIRA Issue HADOOP-11687
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 7a446162a366 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / acca149
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/8986/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8986/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8986/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          qwertymaniac Harsh J added a comment -

          All of the S3A tests pass with this change when running mvn test under the hadoop-aws module. Awaiting new Jenkins results.

          qwertymaniac Harsh J added a comment -

          Updated patch:

          • Added null checks during the clone on fields that are nullable, as a null getting through appears to cause a problem later during the CopyObject operation (a brief sketch follows after the test output below)
          • Fixed the findbugs suggestion
          • Unrelated, but added a note about core-site.xml for testing
          Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 130.767 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
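
          A minimal sketch of those null guards (not the exact patch hunk); the helper name is hypothetical and only a few of the nullable ObjectMetadata attributes are shown:

          import com.amazonaws.services.s3.model.ObjectMetadata;

          final class NullGuardedCloneSketch {

            private NullGuardedCloneSketch() {
            }

            // Copy nullable source attributes only when present, so a null never
            // reaches the subsequent CopyObject request.
            static void copyNullableFields(ObjectMetadata source, ObjectMetadata dest) {
              if (source.getCacheControl() != null) {
                dest.setCacheControl(source.getCacheControl());
              }
              if (source.getContentEncoding() != null) {
                dest.setContentEncoding(source.getContentEncoding());
              }
              if (source.getContentType() != null) {
                dest.setContentType(source.getContentType());
              }
              if (source.getHttpExpiresDate() != null) {
                dest.setHttpExpiresDate(source.getHttpExpiresDate());
              }
              // ...and so on for the remaining nullable attributes from the patch.
            }
          }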
          
          qwertymaniac Harsh J added a comment -

          Thanks Steve Loughran, I am currently running the rename operation tests. Will update with the test results and changes/a new patch as required.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 46s trunk passed
          +1 compile 0m 14s trunk passed with JDK v1.8.0_77
          +1 compile 0m 14s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 14s trunk passed
          +1 mvnsite 0m 20s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 0m 33s trunk passed
          +1 javadoc 0m 14s trunk passed with JDK v1.8.0_77
          +1 javadoc 0m 15s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 14s the patch passed
          +1 compile 0m 10s the patch passed with JDK v1.8.0_77
          +1 javac 0m 10s the patch passed
          +1 compile 0m 12s the patch passed with JDK v1.7.0_95
          +1 javac 0m 12s the patch passed
          -1 checkstyle 0m 12s hadoop-tools/hadoop-aws: patch generated 1 new + 38 unchanged - 0 fixed = 39 total (was 38)
          +1 mvnsite 0m 17s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 findbugs 0m 44s hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 0m 11s the patch passed with JDK v1.8.0_77
          +1 javadoc 0m 13s the patch passed with JDK v1.7.0_95
          +1 unit 0m 9s hadoop-aws in the patch passed with JDK v1.8.0_77.
          +1 unit 0m 12s hadoop-aws in the patch passed with JDK v1.7.0_95.
          +1 asflicense 0m 18s Patch does not generate ASF License warnings.
          14m 21s



          Reason Tests
          FindBugs module:hadoop-tools/hadoop-aws
            org.apache.hadoop.fs.s3a.S3AFileSystem.cloneObjectMetadata(ObjectMetadata) makes inefficient use of keySet iterator instead of entrySet iterator At S3AFileSystem.java:keySet iterator instead of entrySet iterator At S3AFileSystem.java:[line 1266]



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12796242/HADOOP-11687.002.patch
          JIRA Issue HADOOP-11687
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 08b6edfd647d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 32c0c3e
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/8983/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/8983/artifact/patchprocess/new-findbugs-hadoop-tools_hadoop-aws.html
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8983/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8983/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          qwertymaniac Harsh J added a comment -

          Updating patch to address Steve's earlier review comment.

          stevel@apache.org Steve Loughran added a comment -

          There's no way to add headers to the object except via reflection? In that case, we can skip that; the s3x tests would benefit from more functional tests and test runs, really.

          The reason all of them are being skipped is that copying them was causing problems on other S3 API implementations, and there are other problematic headers than just Connection: close.

          LGTM.

          Harsh: I'll give you the privilege of testing this; if you are happy, then put it in.

          qwertymaniac Harsh J added a comment -

          Yes, this can solve HADOOP-12970 too. I am unsure whether skipping all headers would be harmful in any way, though (in my approach I only skipped the problematic header rather than all of them).

          For a test case, are we looking at a manual reflection-based injection of the bad header? I can help with adding such a test if needed. It looks like this is currently being worked on by Aaron Peterson, so I'll defer to him.

          stevel@apache.org Steve Loughran added a comment -

          HADOOP-12970 shows that Connection: close headers can break things. This filter needs to strip them.

          Can we have a test for this operation?

          stevel@apache.org Steve Loughran added a comment -

          replace

          if (source.getUserMetadata().isEmpty() != Boolean.TRUE) {
            java.util.Map<String, String> smd = source.getUserMetadata();
            for (String key : smd.keySet()) {
              ret.addUserMetadata(key, smd.get(key));
            }
          }
          

          with

            Map<String, String> smd = source.getUserMetadata();
            for (String key : smd.keySet()) {
              ret.addUserMetadata(key, smd.get(key));
            }
          

          and rely on the loop simply not executing when the key set is empty.

          Thomas Demoor Thomas Demoor added a comment -

          This should also fix another bug. The current clone implementation also copies response headers (e.g. ETag, Accept-Ranges) from the source object to the new copy request. AWS can handle these superfluous headers, but other implementations might not handle them well.

          Looks OK at first glance; slight concerns:

          • have we got everything? Will review in more detail later.
          • we'll need to keep this up to date with every AWS version bump (check whether they have added or changed something).
          petera5 Aaron Peterson added a comment -

          Potential patch file - Peterson, Aaron

          petera5 Aaron Peterson added a comment -

          Implementing a different cloning method seems to alleviate the COPY issue:

          1092c1092
          < final ObjectMetadata dstom = srcom.clone();
          ---
          > ObjectMetadata dstom = copyOfObjMetadata(srcom);
          1197a1198,1224
          > // Constructs copy of object metadata
          > private ObjectMetadata copyOfObjMetadata(ObjectMetadata source) {
          >   ObjectMetadata ret = new ObjectMetadata();
          >   ret.setCacheControl(source.getCacheControl());
          >   ret.setContentDisposition(source.getContentDisposition());
          >   ret.setContentEncoding(source.getContentEncoding());
          >   ret.setContentLength(source.getContentLength());
          >   ret.setContentMD5(source.getContentMD5());
          >   ret.setContentType(source.getContentType());
          >   ret.setExpirationTime(source.getExpirationTime());
          >   ret.setExpirationTimeRuleId(source.getExpirationTimeRuleId());
          >   ret.setHttpExpiresDate(source.getHttpExpiresDate());
          >   ret.setLastModified(source.getLastModified());
          >   ret.setOngoingRestore(source.getOngoingRestore());
          >   ret.setRestoreExpirationTime(source.getRestoreExpirationTime());
          >   ret.setSSEAlgorithm(source.getSSEAlgorithm());
          >   ret.setSSECustomerAlgorithm(source.getSSECustomerAlgorithm());
          >   ret.setSSECustomerKeyMd5(source.getSSECustomerKeyMd5());
          >   if (source.getUserMetadata().isEmpty() != Boolean.TRUE) {
          >     java.util.Map<String, String> smd = source.getUserMetadata();
          >     for (String key : smd.keySet()) {
          >       ret.addUserMetadata(key, smd.get(key));
          >     }
          >   }
          >   return ret;
          > }
          >

          djannot Denis Jannot added a comment -

          This could be fixed by modifying the code below to remove all the x-* metadata which isn't x-amz-* metadata:

          ObjectMetadata srcom = s3.getObjectMetadata(bucket, srcKey);
          final ObjectMetadata dstom = srcom.clone();
          if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
            dstom.setServerSideEncryption(serverSideEncryptionAlgorithm);
          }
          CopyObjectRequest copyObjectRequest = new CopyObjectRequest(bucket, srcKey, bucket, dstKey);
          copyObjectRequest.setCannedAccessControlList(cannedACL);
          copyObjectRequest.setNewObjectMetadata(dstom);

          stevel@apache.org Steve Loughran added a comment -

          Doesn't s3a construct its signatures inside the AWS toolkit? If so, how do you propose addressing this?


            People

            • Assignee:
              qwertymaniac Harsh J
              Reporter:
              djannot Denis Jannot
            • Votes: 0
            • Watchers: 12
