After chatting with Jon offline, it appears we can write to S3 from Flume with zero code changes via the Hadoop S3 integration outlined at http://wiki.apache.org/hadoop/AmazonS3 — we just include a few jar files and set a few configuration variables.
The jars required:
The configuration variables that need to change are indicated on the wiki page linked above; essentially you need to tell Flume how to authenticate with AWS.
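For reference, the authentication properties described on that wiki page go into Hadoop's core-site.xml (which Flume picks up through the HDFS integration); the key values below are placeholders:

```xml
<!-- core-site.xml: credentials for the S3 native (s3n) filesystem -->
<!-- Replace the placeholder values with your actual AWS credentials. -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_AWS_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
</property>
```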
Once the jar files and configuration variables are set, just use collectorSink("s3n://my-bucket/my-dir", "my-file-name-prefix").
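To sketch what that looks like in a Flume (0.9.x-style) data-flow config — the node name, source, and port below are assumptions for illustration:

```
# Hypothetical collector node writing to S3 via the s3n filesystem
collector01 : collectorSource( 35853 ) | collectorSink( "s3n://my-bucket/my-dir", "my-file-name-prefix" );
```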
I'm going to test this one now...