Hadoop Common / HADOOP-15220 Über-jira: S3a phase V: Hadoop 3.2 features / HADOOP-15239

S3ABlockOutputStream.flush() should be a no-op when the stream is closed


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Trivial
    • Resolution: Fixed
    • Affects Version/s: 2.9.0, 2.8.3, 2.7.5, 3.0.0
    • Fix Version/s: 3.2.0
    • Component/s: fs/s3
    • Labels: None

    Description

      When you call flush() on a closed S3A output stream, you get a stack trace.

      This can cause problems in code with race conditions across threads, e.g. FLINK-8543.

      We could make it log a warning ("stream closed") rather than raise an IOException. flush() is just a hint, after all.
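
      A minimal sketch of what that guard could look like, assuming a closed AtomicBoolean field and an SLF4J-style LOG in S3ABlockOutputStream (names are illustrative, not taken from the attached patches):

        // Hypothetical sketch, not the committed patch: make flush() a no-op
        // on a closed stream, logging a warning instead of throwing.
        @Override
        public synchronized void flush() throws IOException {
          if (closed.get()) {   // 'closed' assumed to be an AtomicBoolean field
            LOG.warn("Stream closed; flush() is a no-op");
            return;
          }
          // ... existing flush behaviour for an open stream ...
        }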

      Attachments

        1. HADOOP-15239.002.patch
          4 kB
          Gabor Bota
        2. HADOOP-15239.001.patch
          1 kB
          Gabor Bota

        Issue Links

        Activity


          People

            Assignee: Gabor Bota (gabor.bota)
            Reporter: Steve Loughran (stevel@apache.org)
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:
