S3AFileSystem's use of ObjectMetadata#clone() inside the copyFile implementation may fail when the connection used to obtain the metadata is closed by the server (i.e. the response carries a Connection: close header). Because this header is not stripped when the ObjectMetadata is created, and because we clone that metadata for use in the next CopyObjectRequest, the copy request ends up carrying a Connection: close header of its own.
This causes signer-related exceptions: the client now includes the Connection header in the SignedHeaders list, but the S3 server does not see the same value for it (Connection headers are likely stripped away before the server tries to match signature hashes), producing a failure like the one below:
This is intermittent because the S3 server does not always add a Connection: close directive to its response, but whenever we receive one AND clone the metadata, the above exception occurs on the copy request. Copies are frequently issued in the context of FileOutputCommitter, when many MR attempt files on an s3a:// destination filesystem are moved to their parent directories post-commit.
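The mechanism of the mismatch can be illustrated with a standalone sketch (this is not the AWS signer itself, just the SignedHeaders idea): SigV4 signs a sorted, lowercase, semicolon-joined list of header names, so if the client signs Connection but the server never sees that header, the two canonical strings diverge and the signatures cannot match.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

/**
 * Illustrative sketch of why a leaked Connection header breaks signing.
 * SigV4 includes a "SignedHeaders" list (sorted, lowercased header names)
 * in the string to sign; if client and server disagree on that list,
 * the computed signatures differ.
 */
public class SignedHeadersMismatch {
    static String signedHeaders(Map<String, String> headers) {
        return headers.keySet().stream()
                .map(String::toLowerCase)
                .sorted()
                .collect(Collectors.joining(";"));
    }

    public static void main(String[] args) {
        Map<String, String> clientHeaders = new TreeMap<>();
        clientHeaders.put("Host", "bucket.s3.amazonaws.com");
        clientHeaders.put("x-amz-date", "20161121T000000Z");
        clientHeaders.put("Connection", "close"); // leaked in via the cloned metadata

        Map<String, String> serverHeaders = new TreeMap<>(clientHeaders);
        serverHeaders.remove("Connection"); // hop-by-hop header dropped in transit

        System.out.println("client signs: " + signedHeaders(clientHeaders));
        System.out.println("server signs: " + signedHeaders(serverHeaders));
        // The two lists differ, so the signature check on the server fails.
    }
}
```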
I've also submitted an upstream fix to the AWS Java SDK to strip out Connection headers when building ObjectMetadata, pending acceptance and release at https://github.com/aws/aws-sdk-java/pull/669. Until a release with that fix is available to us, we'll need to work around the clone approach by manually excluding the Connection header (not straightforward, since the metadata's header map is private with no mutable access). We can remove this workaround once a release containing the upstream fix is available.
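The workaround's core logic can be sketched as a plain header-map filter (names here are illustrative, not the actual patch): instead of ObjectMetadata#clone(), rebuild the metadata destined for the CopyObjectRequest while skipping any Connection header. In the real S3AFileSystem code this has to go through ObjectMetadata's individual setters, because the raw metadata map is exposed read-only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of the workaround: copy every raw metadata header except
 * "Connection" (case-insensitively), so the hop-by-hop header from the
 * metadata response never leaks into the signed copy request.
 */
public class MetadataCloneWorkaround {
    static Map<String, Object> copyWithoutConnection(Map<String, Object> rawMetadata) {
        Map<String, Object> copy = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : rawMetadata.entrySet()) {
            if (!"Connection".equalsIgnoreCase(e.getKey())) {
                copy.put(e.getKey(), e.getValue());
            }
        }
        return copy;
    }
}
```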