When you run hadoop fs -rm s3a://bucket/large-file there is a long delay, and then you are told the file has been moved to s3a://Users/stevel/.Trash/current/large-file, where it still incurs storage costs. To actually free the space you then have to delete the trash copy with -skipTrash, because the fs -expunge command only works against the default filesystem: you can't point it at an object store unless that store is the default FS.
I'd like an option to tell the shell that it should bypass the trash rename on an FS-by-FS basis, and for fs -expunge to take a filesystem URI as an optional argument.
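For reference, a sketch of the current behaviour and the manual workaround (bucket name and trash path are illustrative, not taken from a real deployment):

```shell
# Deleting on s3a renames the object into the user's trash directory:
# on an object store this is a slow copy-then-delete, and the trash
# copy continues to incur storage charges.
hadoop fs -rm s3a://bucket/large-file

# To reclaim the space you must delete the trash copy yourself,
# bypassing trash entirely (path is illustrative).
hadoop fs -rm -skipTrash s3a://bucket/user/stevel/.Trash/Current/large-file

# Or avoid the trash round-trip in the first place:
hadoop fs -rm -skipTrash s3a://bucket/large-file

# fs -expunge only empties trash on the default filesystem
# (fs.defaultFS), so it does not help unless the bucket is the
# default FS.
hadoop fs -expunge
```

The proposal above would make the -skipTrash behaviour configurable per filesystem, and let -expunge be pointed at a specific store.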