Hadoop Common
  1. Hadoop Common
  2. HADOOP-14831 Über-jira: S3a phase IV: Hadoop 3.1 features
  3. HADOOP-15216

S3AInputStream to handle reconnect on read() failure better


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None

    Description

      S3AInputStream handles any IOException with a close() of the stream and a single re-invocation of the read, with

      • no backoff
      • no abort of the HTTPS connection, which is instead just returned to the pool. If httpclient hasn't noticed the failure, the same broken connection may be handed back to the caller on the next read.

      Proposed

      • switch to the invoker
      • a retry policy explicitly for the stream (EOF => throw; timeout => close, sleep, retry; etc.)
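      The proposed behaviour could be sketched roughly as below. This is illustrative only, not the actual Hadoop Invoker/S3ARetryPolicy API: the names reopen(), abortConnection(), RETRY_LIMIT and BACKOFF_MS are all hypothetical.

      ```java
      import java.io.EOFException;
      import java.io.IOException;
      import java.io.InputStream;

      /**
       * Sketch of a retry policy for a stream read: EOF propagates immediately,
       * transient IOEs trigger connection abort + exponential backoff + reopen.
       */
      abstract class RetryingRead {
        private static final int RETRY_LIMIT = 3;
        private static final long BACKOFF_MS = 100;

        /** Re-open the stream at the given offset (hypothetical helper). */
        protected abstract InputStream reopen(long pos) throws IOException;

        /** Abort the underlying HTTPS connection rather than pooling it. */
        protected abstract void abortConnection();

        int readWithRetry(InputStream in, long pos) throws IOException {
          for (int attempt = 0; ; attempt++) {
            try {
              return in.read();
            } catch (EOFException e) {
              throw e;                   // EOF => throw, never retry
            } catch (IOException e) {
              if (attempt >= RETRY_LIMIT) {
                throw e;                 // retries exhausted
              }
              abortConnection();         // don't return a broken socket to the pool
              try {
                Thread.sleep(BACKOFF_MS << attempt);   // exponential backoff
              } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IOException("interrupted during retry", ie);
              }
              in = reopen(pos);          // fresh connection for the next attempt
            }
          }
        }
      }
      ```

      The key difference from the current code is the abortConnection() call before the retry, so a connection that threw mid-read can never be recycled through the pool.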

      We could think about extending the fault injection to inject stream read failures intermittently too, though it would need something in S3AInputStream to (optionally) wrap the HTTP input streams with the failing stream.
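      Such a wrapper might look like the following. This is a hypothetical sketch, not existing Hadoop code: the class name, the failure-probability parameter, and the seeded Random are all assumptions about how the injection could be made deterministic in tests.

      ```java
      import java.io.FilterInputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.util.Random;

      /**
       * Hypothetical fault-injecting wrapper: S3AInputStream could (optionally)
       * wrap the HTTP input stream with this to simulate intermittent read
       * failures. A seeded Random keeps test runs reproducible.
       */
      class FaultInjectingInputStream extends FilterInputStream {
        private final double failureProbability;
        private final Random random;

        FaultInjectingInputStream(InputStream in, double failureProbability, long seed) {
          super(in);
          this.failureProbability = failureProbability;
          this.random = new Random(seed);
        }

        @Override
        public int read() throws IOException {
          maybeFail();
          return super.read();
        }

        @Override
        public int read(byte[] b, int off, int len) throws IOException {
          maybeFail();
          return super.read(b, off, len);
        }

        private void maybeFail() throws IOException {
          if (random.nextDouble() < failureProbability) {
            throw new IOException("injected failure");  // simulated connection reset
          }
        }
      }
      ```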

      Attachments

        Issue Links

        Activity


          People

            Assignee: Unassigned
            Reporter: stevel@apache.org Steve Loughran
            Votes: 0
            Watchers: 3

