Hadoop Common / HADOOP-16900

Very large files can be truncated when written through S3AFileSystem


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.2.1
    • Fix Version/s: 3.3.1, 3.4.0
    • Component/s: fs/s3

    Description

      If the size of a written file exceeds 10,000 * fs.s3a.multipart.size, the resulting S3 object is silently truncated. The S3 API limits a multipart upload to 10,000 parts, and an apparent bug makes this failure non-fatal, so the multipart upload is marked as successfully completed even though not all of the data was uploaded.
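
      As a rough illustration (not part of the original report), the sketch below computes the largest object that can be written before the 10,000-part limit is reached for a given part size; the 64 MB value used here is an assumed fs.s3a.multipart.size, not necessarily your cluster's setting.

      {code:java}
      public class S3AMultipartLimit {
          // Hard limit on the number of parts in a single S3 multipart upload.
          private static final long MAX_PARTS = 10_000L;

          public static void main(String[] args) {
              // Assumed part size: fs.s3a.multipart.size = 64M (adjust to your configuration).
              long partSizeBytes = 64L * 1024 * 1024;
              // Largest object the upload can hold before the part limit is exceeded
              // and, per this issue, silent truncation can occur.
              long maxObjectBytes = MAX_PARTS * partSizeBytes;

              System.out.printf("Part size: %d MB%n", partSizeBytes / (1024 * 1024));
              System.out.printf("Largest safe object size: %.1f GB%n",
                      maxObjectBytes / (1024.0 * 1024 * 1024));
          }
      }
      {code}

      With a 64 MB part size the limit works out to roughly 625 GB; files larger than that were at risk of the truncation described above.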


            People

              Assignee: Mukund Thakur (mukund-thakur)
              Reporter: Andrew Olson (noslowerdna)
              Votes: 0
              Watchers: 8
