In the AWS S3 Java SDK, the defaults for the multipart copy threshold and chunk size are very high.
In internal testing we have found that a lower but still reasonable threshold and chunk size can be extremely beneficial; in our case we set both the threshold and part size to 25 MB with good results.
Amazon enforces a minimum part size of 5 MB.
For the S3A filesystem, file renames are actually implemented via a remote copy request, which is already quite slow compared to a rename on HDFS. This very high threshold for utilizing the multipart functionality can make performance considerably worse, particularly for files in the 100 MB to 5 GB range, which is fairly typical for MapReduce job outputs.
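To make the effect of the threshold concrete, here is a small self-contained sketch of the request-count math: how many copy requests a rename issues for an object of a given size under a given threshold and part size. The class and method names are illustrative, not from the S3A source.

```java
// Sketch (illustrative names): request count for a server-side copy.
public class MultipartCopyMath {
    static final long MB = 1024L * 1024L;

    // Number of copy requests needed for an object of the given size:
    // one CopyObject below the threshold, otherwise one UploadPartCopy
    // per part (ceiling division).
    static long copyRequests(long objectSize, long threshold, long partSize) {
        if (objectSize < threshold) {
            return 1;
        }
        return (objectSize + partSize - 1) / partSize;
    }

    public static void main(String[] args) {
        long size = 500 * MB; // a typical MapReduce output file
        // Proposed 25 MB threshold and part size: 20 parallelizable part copies.
        System.out.println(copyRequests(size, 25 * MB, 25 * MB)); // 20
        // A 5 GB threshold: a single, non-parallel CopyObject request.
        System.out.println(copyRequests(size, 5L * 1024 * MB, 100 * MB)); // 1
    }
}
```

The point is that below the threshold the copy is a single serial request, while above it the parts can be copied in parallel, which is where the rename speedup comes from.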
Two apparent options are:
1) Use the same configuration (fs.s3a.multipart.threshold, fs.s3a.multipart.size) for both. This seems preferable, as the accompanying documentation for these configuration properties already says that they are applicable to either "uploads or copies". We just need to add the missing TransferManagerConfiguration#setMultipartCopyThreshold and TransferManagerConfiguration#setMultipartCopyPartSize calls where the TransferManager is configured.
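A sketch of what option 1 could look like, assuming AWS SDK v1's TransferManagerConfiguration and that partSize and multiPartThreshold already hold the parsed fs.s3a.multipart.size and fs.s3a.multipart.threshold values (variable names are illustrative):

```java
TransferManagerConfiguration transferConfiguration =
    new TransferManagerConfiguration();
// Existing upload-side settings:
transferConfiguration.setMinimumUploadPartSize(partSize);
transferConfiguration.setMultipartUploadThreshold(multiPartThreshold);
// Missing copy-side calls: apply the same values to copies (renames):
transferConfiguration.setMultipartCopyPartSize(partSize);
transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
transfers.setConfiguration(transferConfiguration);
```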
2) Add two new configuration properties so that the copy threshold and part size can be independently configured, possibly with defaults lower than Amazon's, and set them on the TransferManagerConfiguration in the same way.
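If option 2 were taken, the wiring might look roughly like this. The property names fs.s3a.multipart.copy.threshold and fs.s3a.multipart.copy.size are hypothetical, chosen here only for illustration, and conf is assumed to be the Hadoop Configuration already in scope:

```java
// Hypothetical property names; fall back to the upload-side values.
long copyPartSize =
    conf.getLong("fs.s3a.multipart.copy.size", partSize);
long copyThreshold =
    conf.getLong("fs.s3a.multipart.copy.threshold", multiPartThreshold);
transferConfiguration.setMultipartCopyPartSize(copyPartSize);
transferConfiguration.setMultipartCopyThreshold(copyThreshold);
```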
In any case, if neither of the above options is an acceptable change, then at a minimum the configuration documentation should be adjusted to match the code, noting that fs.s3a.multipart.threshold and fs.s3a.multipart.size apply only to uploads of new objects and not to copies (i.e. renames).