Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
In the current implementation, the KMS provider is specified in the client configuration, so a client can have only one KMS. In a multi-cluster environment, a client reading encrypted data from multiple clusters will obtain a KMS token only for the local cluster.
I am not sure whether the target version is correct.
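As a minimal sketch of the limitation, this is roughly how the provider is named in a client-side core-site.xml via the dfs.encryption.key.provider.uri key (the hostname and port here are hypothetical):

<property>
  <name>dfs.encryption.key.provider.uri</name>
  <!-- Only one KMS URI can be configured per client. Reads from a remote
       cluster's encryption zones still contact this KMS, so the client
       never requests a token for the remote cluster's KMS. -->
  <value>kms://http@kms.cluster-a.example.com:9600/kms</value>
</property>

With a setting like this, the client resolves the same provider URI regardless of which cluster's NameNode it is talking to.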
Attachments
Issue Links
- breaks
  - HDFS-13696 Distcp between 2 secure clusters and non encrypted zones fails with connection timeout (Open)
  - HADOOP-14814 Fix incompatible API change on FsServerDefaults to HADOOP-14104 (Resolved)
  - HDFS-11689 New exception thrown by DFSClient#isHDFSEncryptionEnabled broke hacky hive code (Resolved)
  - HDFS-11702 Remove indefinite caching of key provider uri in DFSClient (Resolved)
- is related to
  - HDDS-11227 Use OM's KMS from client side when connecting to a cluster and dealing with encrypted data (Resolved)
  - HDFS-11687 Add new public encryption APIs required by Hive (Resolved)
  - HIVE-16490 Hive should not use private HDFS APIs for encryption (Closed)
- relates to
  - HDFS-13371 NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.X (Resolved)
  - HADOOP-16350 Ability to tell HDFS client not to request KMS Information from NameNode (Resolved)