Description
Per-bucket JCEKS support turns out to be complex, as you have to manage multiple JCEKS files and configure the client to ask for the right one. This is because we're calling Configuration.getPassword("fs.s3a.secret.key").
If, before that, we check for the explicit id, key and session key in the per-bucket properties (fs.s3a.$bucket.secret & c.), we could have a single JCEKS file with all the secrets for the different buckets. You would only need to explicitly point the base config at the secrets file, and the right credentials would be picked up, if set.
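A minimal sketch of the intended lookup order, assuming a hypothetical helper and per-bucket key layout (this is not the actual S3AUtils code; Configuration.getPassword() is real and consults any providers on hadoop.security.credential.provider.path before falling back to the plain config):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

/** Hypothetical sketch of per-bucket secret lookup against a single JCEKS file. */
public class PerBucketSecrets {

  /**
   * Check a per-bucket override first (e.g. a key derived from the bucket name),
   * then fall back to the base property such as "fs.s3a.secret.key".
   * Because getPassword() asks the credential providers first, one JCEKS file
   * can hold the entries for every bucket.
   */
  static String lookupSecret(Configuration conf, String bucket, String baseKey)
      throws IOException {
    // example per-bucket key layout; the exact property names are an assumption
    String bucketKey = baseKey.replace("fs.s3a.", "fs.s3a.bucket." + bucket + ".");
    char[] secret = conf.getPassword(bucketKey);   // per-bucket entry, if present
    if (secret == null) {
      secret = conf.getPassword(baseKey);          // fall back to the base secret
    }
    return secret == null ? null : new String(secret);
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // single JCEKS file holding the secrets for all buckets (path is an example)
    conf.set("hadoop.security.credential.provider.path",
        "jceks://hdfs@nn/secrets/s3a.jceks");
    String secretKey = lookupSecret(conf, "landsat", "fs.s3a.secret.key");
    System.out.println(secretKey != null ? "found secret" : "no secret configured");
  }
}
```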
Attachments
Issue Links
- contains
  - HADOOP-14723 reinstate URI parameter in AWSCredentialProvider constructors (Resolved)
- depends upon
  - HADOOP-14723 reinstate URI parameter in AWSCredentialProvider constructors (Resolved)
- is broken by
  - HADOOP-14135 Remove URI parameter in AWSCredentialProvider constructors (Resolved)
- is depended upon by
  - HADOOP-14614 Hive doesn't let s3a patch the credential provider path (Resolved)
- is related to
  - HADOOP-14821 Executing the command 'hdfs -Dhadoop.security.credential.provider.path=file1.jceks,file2.jceks' fails if permission is denied to some files (Open)
  - HADOOP-13972 ADLS to support per-store configuration (Resolved)
- relates to
  - HADOOP-14324 Refine S3 server-side-encryption key as encryption secret; improve error reporting and diagnostics (Resolved)
- supercedes
  - HADOOP-14625 error message in S3AUtils.getServerSideEncryptionKey() needs to expand property constant (Resolved)