Jackrabbit Oak / OAK-6565

GetBlobResponseEncoder should not write all chunks at once


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.7.6
    • Fix Version/s: 1.7.6, 1.8.0
    • Component/s: segment-tar

    Description

      GetBlobResponseEncoder writes all the chunks too fast, leaving the channel in a non-writable state after the first write. The problem is not visible at first glance, especially when testing with small blobs. Increasing the blob size, as done for OAK-6538, revealed it. Not only does this trigger hidden OutOfMemory errors on either the server or the client, but sometimes incomplete blobs are sent along, which the client interprets as valid.
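      A simplified illustration of the problematic pattern (not the actual Oak encoder, names are made up): every chunk is written in one loop, so all of them pile up in Netty's outbound buffer as soon as the channel stops being writable.

{code:java}
import java.io.InputStream;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;

class NaiveBlobWriter {

    private static final int CHUNK_SIZE = 8 * 1024;

    void writeBlob(ChannelHandlerContext ctx, InputStream in) throws Exception {
        byte[] buffer = new byte[CHUNK_SIZE];
        int read;
        while ((read = in.read(buffer)) != -1) {
            ByteBuf chunk = Unpooled.copiedBuffer(buffer, 0, read);
            // ctx.channel().isWritable() is never consulted, so a large blob
            // exhausts the heap, and a failed intermediate write can leave the
            // client with a truncated blob that still looks complete.
            ctx.writeAndFlush(chunk);
        }
    }
}
{code}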

      A more elegant solution, which also solves the memory consumption problem, would be to use ChunkedWriteHandler, which employs complex logic on how and when to write the chunks. ChunkedWriteHandler must be used in conjunction with a custom ChunkedInput<ByteBuf> implementation that generates header + payload chunks from an InputStream, as is done currently. This way the server sends more chunks only after the previous one has been consumed by the client (see the sketch below).
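      A minimal sketch of the proposed approach, assuming Netty 4.1: a custom ChunkedInput<ByteBuf> that turns an InputStream into header + payload chunks, so that ChunkedWriteHandler only pulls the next chunk once the previous one has been flushed. The class name and the framing (a 4-byte length header before each payload) are hypothetical, not the actual Oak wire format.

{code:java}
import java.io.InputStream;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.stream.ChunkedInput;

class BlobChunkedInput implements ChunkedInput<ByteBuf> {

    private static final int CHUNK_SIZE = 8 * 1024;

    private final InputStream in;
    private final long length;
    private long progress;
    private boolean endOfInput;

    BlobChunkedInput(InputStream in, long length) {
        this.in = in;
        this.length = length;
    }

    @Override
    public boolean isEndOfInput() {
        return endOfInput;
    }

    @Override
    public void close() throws Exception {
        in.close();
    }

    @Override
    public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception {
        return readChunk(ctx.alloc());
    }

    @Override
    public ByteBuf readChunk(ByteBufAllocator allocator) throws Exception {
        byte[] payload = new byte[CHUNK_SIZE];
        int read = in.read(payload);
        if (read == -1) {
            // No more data; ChunkedWriteHandler checks isEndOfInput() next.
            endOfInput = true;
            return null;
        }
        progress += read;
        // Hypothetical framing: a 4-byte length header followed by the payload.
        ByteBuf chunk = allocator.buffer(4 + read);
        chunk.writeInt(read);
        chunk.writeBytes(payload, 0, read);
        return chunk;
    }

    @Override
    public long length() {
        return length;
    }

    @Override
    public long progress() {
        return progress;
    }
}
{code}

      For this to take effect, a ChunkedWriteHandler has to sit in the channel pipeline (e.g. pipeline.addLast(new ChunkedWriteHandler())), and the server then calls ctx.writeAndFlush(new BlobChunkedInput(stream, blobLength)) instead of looping over the chunks itself.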

      /cc Francesco Mari

      Attachments

        Issue Links

        Activity


          People

            Assignee: Andrei Dulceanu
            Reporter: Andrei Dulceanu
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
