Hadoop Common / HADOOP-16829 Über-jira: S3A Hadoop 3.3.1 features / HADOOP-14937

initial part uploads seem to block unnecessarily in S3ABlockOutputStream


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 3.0.0-beta1
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None

    Description

      From looking at a YourKit snapshot of an FsShell process running a hadoop fs -put file:///... s3a://..., it seems that the first part of the multipart upload doesn't begin uploading until n of the s3a-transfer-shared-pool threads are able to start uploading, where n is the value of fs.s3a.fast.upload.active.blocks.
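
      For reference, here is a minimal sketch of the settings in play, via Hadoop's Configuration API. The wrapper class and method are made up for illustration; the property names are the ones discussed in this report, and 64 MB is one of the part sizes I compare below.

      import org.apache.hadoop.conf.Configuration;

      public class S3AUploadSettingsSketch {
        static Configuration configure() {
          Configuration conf = new Configuration();
          // Buffer upload blocks in off-heap memory, as described above.
          conf.set("fs.s3a.fast.upload.buffer", "bytebuffer");
          // At most 4 blocks per stream buffered/uploading at once (the "n" above).
          conf.setInt("fs.s3a.fast.upload.active.blocks", 4);
          // Part size in bytes; 64 MB here, though I also tested 5 MB (see below).
          conf.setLong("fs.s3a.multipart.size", 64L * 1024 * 1024);
          return conf;
        }
      }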

      To clarify a bit, the series of events that I expected to see with fs.s3a.fast.upload.active.blocks set to 4 is (a toy sketch of this pipeline follows the list):

      1. An amount of data equal to fs.s3a.multipart.size is buffered into off-heap memory (I have fs.s3a.fast.upload.buffer = bytebuffer).
      2. As soon as that happens, a thread begins to upload that part. Meanwhile, the main thread continues to buffer data into off-heap memory.
      3. Once another part has been buffered into off-heap memory, a separate thread uploads that part, and so on.
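
      In code, the pipelining I expected looks roughly like the toy model below. To be clear, this is only an illustration of the expected scheduling, not the actual S3ABlockOutputStream code; the class and helper names are made up, and the sizes are placeholders.

      import java.nio.ByteBuffer;
      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.*;

      public class PipelinedUploadSketch {
        static final int PART_SIZE = 8 * 1024 * 1024; // stand-in for fs.s3a.multipart.size
        static final int ACTIVE_BLOCKS = 4;           // stand-in for fs.s3a.fast.upload.active.blocks

        public static void main(String[] args) throws Exception {
          ExecutorService pool = Executors.newFixedThreadPool(ACTIVE_BLOCKS);
          Semaphore inFlight = new Semaphore(ACTIVE_BLOCKS); // bounds parts buffered or uploading
          List<Future<?>> uploads = new ArrayList<>();

          for (int part = 1; part <= 10; part++) {
            inFlight.acquire();           // only blocks once ACTIVE_BLOCKS parts are outstanding
            ByteBuffer block = ByteBuffer.allocateDirect(PART_SIZE); // "bytebuffer" buffering
            fillWithData(block);          // main thread buffers the next part
            final int partNumber = part;
            // Expected: this part starts uploading now, while the main thread
            // loops around and begins buffering the next part.
            uploads.add(pool.submit(() -> {
              uploadPart(partNumber, block);
              inFlight.release();
            }));
          }
          for (Future<?> f : uploads) {
            f.get();                      // wait for all part uploads to finish
          }
          pool.shutdown();
        }

        static void fillWithData(ByteBuffer b) {
          while (b.hasRemaining()) b.put((byte) 0);
          b.flip();
        }

        static void uploadPart(int n, ByteBuffer b) {
          // Stand-in for the per-part HTTP request.
          System.out.printf("uploading part %d (%d bytes)%n", n, b.remaining());
        }
      }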

      Whereas what I think the YK snapshot shows happening is:

      1. An amount of data equal to fs.s3a.multipart.size * 4 is buffered into off-heap memory.
      2. Four threads start to upload one part each at the same time.

      I've attached a picture of the "Threads" tab to show what I mean. Basically the times at which the first four s3a-transfer-shared-pool threads start to upload are roughly the same, whereas I would've expected them to be more staggered.

      I'm actually not sure whether this is the expected behavior or not, so feel free to close if this doesn't come as a surprise to anyone.

      For some context, I've been trying to get a sense of which values of fs.s3a.multipart.size perform best at different file sizes. One thing I found confusing is that a part size of 5 MB seems to outperform a part size of 64 MB up until file sizes of roughly 500 MB. This seems odd, since each uploadPart call is its own HTTP request, and I would've expected that per-request overhead to become costly at small part sizes. My suspicion is that with 4 concurrent part uploads and 64 MB blocks, we have to wait until 256 MB are buffered before any uploading can start, while with 5 MB blocks uploading can start as soon as 20 MB are buffered, and that's what gives the smaller parts the advantage for smaller files.
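
      As a back-of-the-envelope check on that suspicion, here is a tiny model of how much data has to be buffered before the first byte goes out, assuming all parts start together as described above; the buffering rate is an assumed number purely for illustration.

      public class StartupDelayModel {
        public static void main(String[] args) {
          long activeBlocks = 4;
          double bufferRateMBps = 400.0; // assumed local read/buffer rate, MB/s (made up)
          for (long partSizeMB : new long[] {5, 64}) {
            // Suspected behavior: nothing uploads until activeBlocks parts are buffered.
            long bufferedMB = activeBlocks * partSizeMB;
            double delaySeconds = bufferedMB / bufferRateMBps;
            System.out.printf("part=%dMB: %dMB buffered before first upload (~%.2fs at %.0fMB/s)%n",
                partSizeMB, bufferedMB, delaySeconds, bufferRateMBps);
          }
        }
      }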

      I'm happy to submit a patch if this is in fact a problem, but wanted to check to make sure I'm not just misunderstanding something.

      Attachments

        1. yjp_threads.png (62 kB, Steven Rand)


          People

            Assignee: Steven Rand
            Reporter: Steven Rand
            Votes: 0
            Watchers: 3
