- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 2.9.0, 3.0.0-alpha4, 2.8.2
- Component/s: kms
- Labels: None
- Target Version/s:
According to the current implementation of the KMS provider in the client configuration, there can only be one KMS.
In a multi-cluster environment, a client reading encrypted data from multiple clusters will only get a KMS token for the local cluster.
Not sure whether the target version is correct or not.
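The single-KMS limitation follows from the client-side provider setting being a single URI. A minimal sketch of such a client configuration (the property name is the standard Hadoop one, later superseded by `hadoop.security.key.provider.path`; the KMS host name is hypothetical):

```xml
<!-- Client-side hdfs-site.xml: only one provider URI can be set here,
     so every encryption operation, regardless of which cluster the data
     lives on, fetches keys and tokens from this single KMS. -->
<property>
  <name>dfs.encryption.key.provider.uri</name>
  <value>kms://https@kms.cluster-a.example.com:9600/kms</value>
</property>
```

With this static value, a client doing cross-cluster reads (e.g. distcp between two secure clusters) cannot discover the remote cluster's KMS, which is why the fix has the client ask the NameNode for the provider path instead.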
- breaks:
  - HDFS-13696 Distcp between 2 secure clusters and non encrypted zones fails with connection timeout (Open)
  - HADOOP-14814 Fix incompatible API change on FsServerDefaults to HADOOP-14104 (Resolved)
  - HDFS-11689 New exception thrown by DFSClient#isHDFSEncryptionEnabled broke hacky hive code (Resolved)
  - HDFS-11702 Remove indefinite caching of key provider uri in DFSClient (Resolved)
- is related to:
  - HIVE-16490 Hive should not use private HDFS APIs for encryption (Resolved)
  - HDFS-11687 Add new public encryption APIs required by Hive (Resolved)
- relates to:
  - HDFS-13371 NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.X (Resolved)
  - HADOOP-16350 Ability to tell HDFS client not to request KMS Information from NameNode (Resolved)