Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.9.0, 3.0.0, 3.1.0
    • Fix Version/s: 3.1.0, 3.0.3
    • Component/s: fs/s3
    • Labels: None
    • Environment: Hadoop 3.1 Snapshot
    • Target Version/s:
    • Flags: Patch

Description

      When I enable SSE-C encryption in Hadoop 3.1 and set fs.s3a.multipart.size to 5 MB, storing data in AWS no longer works. For example, running the following code:

      >>> df1 = spark.read.json('/home/user/people.json')
      >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
      

      fails with the following exception:

      com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload initiate requested encryption. Subsequent part requests must include the appropriate encryption parameters.
      
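      For reference, the setup described above corresponds to a client-side configuration along the following lines. This is only an illustrative sketch in Java against the standard S3A options; the class name and the encryption key value are placeholders.

      import org.apache.hadoop.conf.Configuration;

      public class SseCMultipartRepro {
        public static void main(String[] args) {
          Configuration conf = new Configuration();
          // Enable SSE-C with a customer-provided key (placeholder value below).
          conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-C");
          conf.set("fs.s3a.server-side-encryption.key", "<base64-encoded-256-bit-AES-key>");
          // A 5 MB part size forces multipart uploads even for small outputs,
          // which is what triggers the failure above.
          conf.setLong("fs.s3a.multipart.size", 5 * 1024 * 1024);
        }
      }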

      After some investigation, I discovered that hadoop-aws doesn't send the SSE-C headers in the Upload Part requests, as required by the AWS specification: https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html

      If you requested server-side encryption using a customer-provided encryption key in your initiate multipart upload request, you must provide identical encryption information in each part upload using the following headers.
      
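      In line with that requirement, the fix amounts to attaching the same customer-provided key that was sent with the initiate request to every subsequent part upload. The following is a rough sketch of the idea using the AWS SDK for Java's UploadPartRequest; the class and method names are illustrative, not the exact hadoop-aws code.

      import com.amazonaws.services.s3.model.SSECustomerKey;
      import com.amazonaws.services.s3.model.UploadPartRequest;

      import java.io.File;

      class PartRequestFactory {
        /** Build a part-upload request, propagating the SSE-C key used at initiate time. */
        static UploadPartRequest newUploadPartRequest(String bucket, String key,
            String uploadId, int partNumber, long partSize, File partFile,
            SSECustomerKey sseKey) {
          UploadPartRequest request = new UploadPartRequest()
              .withBucketName(bucket)
              .withKey(key)
              .withUploadId(uploadId)
              .withPartNumber(partNumber)
              .withPartSize(partSize)
              .withFile(partFile);
          if (sseKey != null) {
            // This is the header propagation that was missing for part uploads.
            request.setSSECustomerKey(sseKey);
          }
          return request;
        }
      }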

      You can find a patch attached to this issue to better clarify the problem.

Attachments

    1. HADOOP-15267-001.patch (2 kB, Anis Elleuch)
    2. HADOOP-15267-002.patch (2 kB, Anis Elleuch)
    3. HADOOP-15267-003.patch (5 kB, Steve Loughran)

People

    • Assignee: Anis Elleuch (vadmeste)
    • Reporter: Anis Elleuch (vadmeste)
    • Votes: 0
    • Watchers: 4
