Details
- Type: Improvement
- Status: Closed
- Priority: Critical
- Resolution: Fixed
Description
This came up when using S3 as the file system backend and running under ECS (Amazon Elastic Container Service).
With no credentials in the container, hadoop-aws will default to EC2 instance-level credentials when accessing S3. However, when running under ECS, you will generally want to default to the task definition's IAM role instead.
In this case you need to set the Hadoop property
fs.s3a.aws.credentials.provider
to one or more fully qualified class names; see the hadoop-aws documentation.
This works as expected when you add this setting to flink-conf.yaml, but there is a further 'gotcha.' Because the AWS SDK is shaded, the actual fully qualified class name for, in this case, the ContainerCredentialsProvider is
org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider
meaning the full setting is:
fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider
If you instead set it to the unshaded class name, you will see a very confusing error stating that the ContainerCredentialsProvider doesn't implement AWSCredentialsProvider (which it most certainly does). The reason is that shading relocates the AWSCredentialsProvider interface as well: the S3A code checks against the shaded copy of the interface, while the unshaded provider class implements the original one, and to the JVM those are two unrelated types.
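The type-identity mismatch behind that error can be sketched in a few lines. This is not Flink or AWS code; the names below are stand-ins for the unshaded and shaded (relocated) copies of the interface, just to show that a class implementing one copy is not an instance of the other:

```java
// Stand-in for the original, unshaded interface (com.amazonaws.auth.AWSCredentialsProvider).
interface AWSCredentialsProvider {}

// Stand-in for the relocated copy produced by shading
// (org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.AWSCredentialsProvider).
class Shaded {
    interface AWSCredentialsProvider {}
}

// Stand-in for the provider class, implementing only the unshaded interface.
class ContainerCredentialsProvider implements AWSCredentialsProvider {}

public class ShadingDemo {
    public static void main(String[] args) {
        Object provider = new ContainerCredentialsProvider();
        // It implements the unshaded interface...
        System.out.println(provider instanceof AWSCredentialsProvider);        // true
        // ...but not the relocated one that the shaded S3A code checks against,
        // hence the "doesn't implement AWSCredentialsProvider" error.
        System.out.println(provider instanceof Shaded.AWSCredentialsProvider); // false
    }
}
```

Full package names are part of a class's identity, so after relocation the two interfaces share only a simple name, nothing more.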
Adding this information (how to specify alternate credential providers, and the shaded-namespace gotcha) to the AWS deployment docs would be useful to anyone else using S3.
Issue Links
- is related to: FLINK-13044 Shading of AWS SDK in flink-s3-fs-hadoop results in ClassNotFoundExceptions (Closed)