Libcloud / LIBCLOUD-792

S3 SignatureDoesNotMatch


    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Storage
    • Labels:
      None
    • Flags:
      Patch

      Description

I noticed that sometimes an upload fails with a SignatureDoesNotMatch error. I tried to debug this and found the following in the error message from S3:

      <StringToSign>PUT\nE8F2ok2KBiJTfNy91PLz+A==\n\n1452529414\n/mybucket/11.png?partNumber=1&uploadId=DQGexmin_N3usw4giAtLWoRbCWWRWQij2a20xPO_BgSRENZRpgGqhgMm9goSPoHkmkheIMBrPcI_Z2xWZtqMmZcmekuVhpTBre4cuS.nfyP6DQuTPEuAYop5abOHbm2t</StringToSign>
      

while in the code, at libcloud/storage/drivers/s3.py:147, string_to_sign equals the following:

      string_to_sign = "PUT\nE8F2ok2KBiJTfNy91PLz+A==\n\n1452529414\n/mybucket/11.png?uploadId=DQGexmin_N3usw4giAtLWoRbCWWRWQij2a20xPO_BgSRENZRpgGqhgMm9goSPoHkmkheIMBrPcI_Z2xWZtqMmZcmekuVhpTBre4cuS.nfyP6DQuTPEuAYop5abOHbm2t&partNumber=1"
      
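To see why the reordering breaks the request: signature v2 is an HMAC-SHA1 over the exact string-to-sign, so any byte difference between what S3 builds and what the client signs produces a different signature. A minimal sketch (the secret key and the shortened uploadId are hypothetical stand-ins):

```python
import base64
import hmac
from hashlib import sha1

# Hypothetical secret key; the real one comes from the driver's credentials.
SECRET_KEY = b'EXAMPLE-SECRET'

def sign(string_to_sign):
    # AWS signature v2: base64-encoded HMAC-SHA1 of the string-to-sign
    digest = hmac.new(SECRET_KEY, string_to_sign.encode('utf-8'), sha1).digest()
    return base64.b64encode(digest).decode('ascii')

# 'XYZ' stands in for the long uploadId shown in the error above
expected = sign("PUT\nE8F2ok2KBiJTfNy91PLz+A==\n\n1452529414"
                "\n/mybucket/11.png?partNumber=1&uploadId=XYZ")
actual = sign("PUT\nE8F2ok2KBiJTfNy91PLz+A==\n\n1452529414"
              "\n/mybucket/11.png?uploadId=XYZ&partNumber=1")
print(expected != actual)  # S3 rejects the request as SignatureDoesNotMatch
```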

Obviously, the order of the GET parameters is not preserved. For me, the following patch does the trick; it keeps the multipart upload parameters in order:

      diff --git a/libcloud/storage/drivers/s3.py b/libcloud/storage/drivers/s3.py
      index 77a86de..1b5a5c0 100644
      --- a/libcloud/storage/drivers/s3.py
      +++ b/libcloud/storage/drivers/s3.py
      @@ -20,6 +20,8 @@ import sys
      
       from hashlib import sha1
      
      +from collections import OrderedDict
      +
       try:
           from lxml.etree import Element, SubElement
       except ImportError:
      @@ -520,7 +522,9 @@ class BaseS3StorageDriver(StorageDriver):
               bytes_transferred = 0
               count = 1
               chunks = []
      -        params = {'uploadId': upload_id}
      +        params = OrderedDict()
      +        params['partNumber'] = 0
      +        params['uploadId'] = upload_id
      
               # Read the input data in chunk sizes suitable for AWS
               for data in read_in_chunks(iterator, chunk_size=CHUNK_SIZE,
      
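With the patch, params is an OrderedDict, so partNumber is always serialized before uploadId regardless of dict hashing. A quick check of that ordering behaviour (urlencode and the placeholder uploadId are illustrative; in the driver the real part number is set for each chunk):

```python
from collections import OrderedDict
from urllib.parse import urlencode

params = OrderedDict()
params['partNumber'] = 1    # the driver updates this per chunk
params['uploadId'] = 'XYZ'  # placeholder for the real upload id

# Insertion order is preserved, so the query string matches what S3 signs
query = urlencode(params)
print(query)  # partNumber=1&uploadId=XYZ
```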


            People

• Assignee:
  Unassigned
• Reporter:
  valexey Alexey
• Votes:
  1
• Watchers:
  3

              Dates

• Created:
• Updated: