Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
The Hadoop FileSystem API has a getServerDefaults() method for retrieving a set of server-side defaults. Ozone's filesystem implementation does not implement this call, so it falls back to the default implementation provided by the FileSystem class. That base implementation derives the defaults from the client's configuration, and HDFS overrides it in the DistributedFileSystem class. Our implementations do not override this, and we do not provide any server-side configuration to the client side at the moment.
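As a point of reference, here is a small illustrative Java snippet (not part of the code base; the ShowServerDefaults class name and the argument handling are assumptions for the example) that prints what a client currently sees from getServerDefaults():

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

// Illustrative client-side check: print a few of the server defaults the
// client sees for a given path. The path argument is a placeholder; any
// Hadoop-compatible filesystem URI can be passed in.
public class ShowServerDefaults {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(args.length > 0 ? args[0] : "/");
    FileSystem fs = path.getFileSystem(conf);
    FsServerDefaults d = fs.getServerDefaults(path);
    System.out.println("block size       : " + d.getBlockSize());
    System.out.println("replication      : " + d.getReplication());
    // For Ozone today this is expected to be empty, because the values
    // above are built from the client configuration, not from the OM.
    System.out.println("key provider uri : " + d.getKeyProviderUri());
  }
}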
We recently saw a problematic use case in which a client on one cluster tries to read encrypted data on another cluster. In HDFS this works, because hadoop.security.key.provider.path is part of the server defaults the NameNode provides to the client, and the client uses that value unless dfs.client.ignore.namenode.default.kms.uri is set to true (it is false by default).
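To make that precedence concrete, here is a simplified sketch in Java of the client-side decision, assuming the KMS URI from the server defaults is passed in as a string; this is not the literal HDFS client code, only an illustration of the behaviour described above:

import org.apache.hadoop.conf.Configuration;

// Simplified sketch (not the actual HDFS code): prefer the KMS URI delivered
// in the server defaults unless the client is configured to ignore it,
// otherwise fall back to the client's own hadoop.security.key.provider.path.
public final class KeyProviderUriResolver {

  private static final String KEY_PROVIDER_PATH =
      "hadoop.security.key.provider.path";
  private static final String IGNORE_SERVER_KMS_URI =
      "dfs.client.ignore.namenode.default.kms.uri";

  static String resolve(Configuration clientConf, String serverDefaultsUri) {
    boolean ignoreServerUri =
        clientConf.getBoolean(IGNORE_SERVER_KMS_URI, false);
    if (!ignoreServerUri && serverDefaultsUri != null
        && !serverDefaultsUri.isEmpty()) {
      // The server (NameNode for HDFS, OM for Ozone) told the client which
      // KMS to use, so cross-cluster reads hit the right key provider.
      return serverDefaultsUri;
    }
    // Fall back to whatever key provider the client has configured locally.
    return clientConf.getTrimmed(KEY_PROVIDER_PATH, "");
  }
}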
If we want to enable this use case, where a client needs to work with encryption zones on multiple clusters, we need to find a way to provide this information to the client side. I believe this should be solved both for FileSystem API based clients and for the Ozone client itself.
I don't believe our S3 API is affected by this problem.
Issue Links
- relates to
  - HDDS-11371 getServerDefaults API call fails when OM version is old (Resolved)
  - HADOOP-14104 Client should always ask namenode for kms provider path. (Resolved)
  - HDDS-11332 Add a docker test to verify OM's KMS is used on client side if present. (Open)
- links to