Hadoop Common / HADOOP-19353 Über-jira: S3A Hadoop 3.4.2 features / HADOOP-17017

S3A client retries on SSL Auth exceptions triggered by "." bucket names


Details

    • Type: Sub-task
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.2.1
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None

    Description

      If you have a "." in a bucket name (it's allowed!) then virtual-host-style HTTPS connections fail with a javax.net.ssl exception. Worse, we retry, and the inner cause is wrapped in generic client exceptions.
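
      For anyone who wants to reproduce the failure outside S3A, here is a minimal Java sketch; the dotted bucket name my.dotted.bucket is hypothetical. With virtual-host addressing the bucket becomes part of the hostname, and the wildcard certificate for *.s3.amazonaws.com only covers a single DNS label, so hostname verification fails before any S3 request is sent.

      import java.net.URL;
      import javax.net.ssl.HttpsURLConnection;

      public class DottedBucketSslDemo {
        public static void main(String[] args) throws Exception {
          // Hypothetical dotted bucket; virtual-host addressing puts it
          // straight into the HTTPS hostname.
          URL url = new URL("https://my.dotted.bucket.s3.amazonaws.com/");
          HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
          // Expected outcome: the TLS handshake fails hostname verification,
          // surfacing a javax.net.ssl exception rather than an S3 error.
          conn.connect();
        }
      }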

      I'm not going to try to be clever about fixing this, but we should:

      • make sure that the inner exception is raised up
      • avoid retries
      • document it in the troubleshooting page
      • if there is a well-known public "." bucket (Cloudera has some), we can test against it

      I have a vague suspicion that the AWS SDK is retrying too; not much we can do there.
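
      One mitigation worth noting on the troubleshooting page is path-style access, which keeps the bucket name out of the hostname so the wildcard certificate matches. A minimal sketch, assuming a hypothetical dotted bucket my.dotted.bucket and the fs.s3a.path.style.access option:

      import java.net.URI;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class DottedBucketPathStyleSketch {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Path-style requests go to the S3 endpoint with the bucket in
          // the URL path, so the certificate hostname matches.
          conf.setBoolean("fs.s3a.path.style.access", true);
          // Hypothetical dotted bucket; needs hadoop-aws on the classpath
          // and valid AWS credentials.
          FileSystem fs = FileSystem.get(new URI("s3a://my.dotted.bucket/"), conf);
          for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
          }
        }
      }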


People

    • Assignee: Unassigned
    • Reporter: Steve Loughran (stevel@apache.org)
    • Votes: 0
    • Watchers: 5
