Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.13.1
    • Component/s: Storage
    • Labels: None

      Description

      I wrote a small script that uploaded the output of a buildbot job and then updated an XML file. The large binary blob worked fine. However, the XML file failed.

      I was using the driver.upload_object_via_stream(iterator=StringIO.StringIO(somexml)) style as in the docs.
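For illustration (not part of the original report): iterating over a file-like object yields one line at a time, which is why the driver saw a tiny chunk per line of XML. Here io.StringIO is the Python 3 equivalent of the Python 2 StringIO.StringIO used above, and the XML content is made up.

```python
import io

# Hypothetical XML payload similar to the one in the report.
somexml = "<root>\n<a/>\n<b/>\n</root>\n"

# Iterating a StringIO yields lines, not fixed-size chunks, so a chunked
# uploader that consumes the iterator directly gets one short "part" per line.
chunks = list(io.StringIO(somexml))
print(chunks)
print([len(c) for c in chunks])  # e.g. the first line is only 7 bytes
```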

      Looking at the LIBCLOUD_DEBUG output, the driver was using the S3 multi-part upload API and making a new "part" for each line - so every 7 bytes or so - but the minimum size for a part upload is 5 MB.

      (I don't know if the first part is allowed to be less than 5 MB if the entire upload is less than 5 MB.)

      I am working around this by forcing multi-part uploads off.

        Activity

        jc2k John Carr added a comment -

        I think this is because read_chunks has fill_size=False by default. Reading the docstring, it seems we should set it to True for S3 multi-part uploads to work in my case.

        It would be nice to fall back to the non-multi-part APIs if the first chunk was less than 5 MB - but I don't know how hard it would be to refactor the code like that.
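A minimal sketch (not libcloud's actual implementation) of what a fill_size flag changes in a chunking helper: with fill_size=False each piece produced by the iterator is yielded as-is, however small, while fill_size=True buffers data until a full chunk_size is accumulated, so every part except the last reaches the target size.

```python
def read_in_chunks(iterator, chunk_size=5 * 1024 * 1024, fill_size=False):
    """Yield chunks from an iterator of strings.

    Sketch only: illustrates the fill_size semantics discussed in this
    issue, not libcloud's real read_in_chunks.
    """
    buf = ''
    for data in iterator:
        buf += data
        if not fill_size:
            # Pass through whatever the iterator produced (e.g. one line).
            yield buf
            buf = ''
        else:
            # Accumulate until at least chunk_size is available.
            while len(buf) >= chunk_size:
                yield buf[:chunk_size]
                buf = buf[chunk_size:]
    if buf:
        yield buf  # final (possibly short) chunk
```

With a tiny chunk_size of 4, an iterator yielding 'ab', 'cd', 'ef' produces three small chunks without filling, but 'abcd' followed by 'ef' with filling enabled.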

        kami Tomaz Muraus added a comment -

        Mahendra M ping - it would be great if you could have a look at this.

        mahendra.m Mahendra M added a comment -

        Tomaz Muraus sure! Will look into it.

        mahendra.m Mahendra M added a comment -

        Yep, I agree with John Carr. Setting fill_size=True makes it work. (Strange, I always thought that fill_size was the default action).

        Uploaded a patch for this.

        Refactoring the code to fall back to a normal upload for sizes less than 5 MB would make the code a bit complicated. I will look into it.

        kami Tomaz Muraus added a comment -

        I just checked the docs (http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html) and it says:

        Part size 5 MB to 5 GB, last part can be < 5 MB

        I would imagine that it should work if there is only one part and it's smaller than 5 MB, because the first part = the last part.

        In any case it would still be good to test that.
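The quoted rule can be expressed as a small check (a hypothetical helper for illustration, not libcloud code): every part except the last must be between 5 MB and 5 GB, while the last part - which in a single-part upload is also the first - may be smaller.

```python
# S3 multipart part-size limits from the quoted AWS documentation.
MIN_PART = 5 * 1024 * 1024          # 5 MB
MAX_PART = 5 * 1024 * 1024 * 1024   # 5 GB

def parts_are_valid(part_sizes):
    """Return True if a list of part sizes satisfies the S3 limits.

    All parts but the last must be within [MIN_PART, MAX_PART];
    the last part only has to respect the upper bound.
    """
    if not part_sizes:
        return False
    for size in part_sizes[:-1]:
        if not (MIN_PART <= size <= MAX_PART):
            return False
    return part_sizes[-1] <= MAX_PART

print(parts_are_valid([1024]))            # True: single part = last part
print(parts_are_valid([1024, MIN_PART]))  # False: small part is not last
```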

        Mahendra M Can you please test the thing I've mentioned above (uploading a small file which results in a single chunk)?

        If it works, please update the CHANGES file and feel free to merge this patch into trunk.

        Thanks.

        mahendra.m Mahendra M added a comment -

        Hi Tomaz Muraus,

        Your assumption is correct. If you call the multipart API with the first part = last part, the upload works. I have tested the patch for the same.

        Will merge it in a short while.

        mahendra.m Mahendra M added a comment -

        BTW, where should I merge it? To some GIT repo or SVN repo?

        kami Tomaz Muraus added a comment -

        To our official git repository - https://git-wip-us.apache.org/repos/asf/libcloud.git

        We've recently switched from svn to git, so the svn repository is now read-only and obsolete.

        (need to write a committer guide)

        jira-bot ASF subversion and git services added a comment -

        Commit 735b5b877f2750e75aca07e351d3d8ae7a675fac in branch refs/heads/trunk from Mahendra M
        [ https://git-wip-us.apache.org/repos/asf?p=libcloud.git;h=735b5b8 ]

        LIBCLOUD-378: S3 uploads fail on small iterators

        mahendra.m Mahendra M added a comment -

        Should I set this issue to "Resolved"? or will it happen automatically?

        kami Tomaz Muraus added a comment -

        It doesn't happen automatically, you need to mark it manually.

        (I also need to document the issue workflow)

        kami Tomaz Muraus added a comment -

        (Dunno if you have noticed, but the tests failed - http://ci.apache.org/builders/libcloud-trunk-tox/builds/273. Build notifications go to commits@libcloud.apache.org)

        mahendra.m Mahendra M added a comment -

        Oh damn!! Will fix it in a while. My mistake!!


        Mahendra

        http://twitter.com/mahendra

        jira-bot ASF subversion and git services added a comment -

        Commit 326e81f09902514a290f5aeb292426b56e2e95b7 in branch refs/heads/trunk from [~mahi1216]
        [ https://git-wip-us.apache.org/repos/asf?p=libcloud.git;h=326e81f ]

        Fix s3 multipart test cases broken by LIBCLOUD-378

        The breakage was caused by an overzealous check.
        Also added test cases for checking small and big uploads via
        S3 multipart upload API

        jira-bot ASF subversion and git services added a comment -

        Commit fedd709cac9e406871310d73ce2c54d5da3a9496 in branch refs/heads/0.13.1 from [~mahi1216]
        [ https://git-wip-us.apache.org/repos/asf?p=libcloud.git;h=fedd709 ]

        Fix s3 multipart test cases broken by LIBCLOUD-378

        The breakage was caused by an overzealous check.
        Also added test cases for checking small and big uploads via
        S3 multipart upload API

        jira-bot ASF subversion and git services added a comment -

        Commit db4cfbdccd1358450ce513d64eb872c637b273e5 in branch refs/heads/0.13.1 from Mahendra M
        [ https://git-wip-us.apache.org/repos/asf?p=libcloud.git;h=db4cfbd ]

        LIBCLOUD-378: S3 uploads fail on small iterators


          People

          • Assignee:
            mahendra.m Mahendra M
            Reporter:
            jc2k John Carr
          • Votes:
            0 Vote for this issue
            Watchers:
            4 Start watching this issue

            Dates

            • Created:
              Updated:
              Resolved:

              Development