Details

Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Version: 2.7.0
Description
The before-ANY/shared_initialization.py hook only regenerates hadoop_env if there is a NameNode or dfs_type is set to HCFS:
def hook(self, env):
    import params
    env.set_params(params)
    setup_users()
    if params.has_namenode or params.dfs_type == 'HCFS':
        setup_hadoop_env()
    setup_java()
This check no longer works as intended, because in the latest ambari-server dfs_type is set as follows:
Map<String, ServiceInfo> serviceInfos = ambariMetaInfo.getServices(stackId.getStackName(), stackId.getStackVersion());
for (ServiceInfo serviceInfoInstance : serviceInfos.values()) {
  if (serviceInfoInstance.getServiceType() != null) {
    LOG.debug("Adding {} to command parameters for {}",
        serviceInfoInstance.getServiceType(), serviceInfoInstance.getName());
    clusterLevelParams.put(DFS_TYPE, serviceInfoInstance.getServiceType());
    break;
  }
}
This iterates over all of the stack services and finds HDFS first, so dfs_type will be HDFS instead of HCFS. As a result, on a cluster that has no NameNode and relies on an HCFS, the condition in the hook above is never true and setup_hadoop_env() is skipped.
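One possible direction for a fix (a sketch only, not necessarily the change that resolved this issue) is to keep scanning the stack services instead of breaking on the first non-null service type, and to let an explicit HCFS type take precedence over the HDFS entry that happens to be found first:

// Sketch, not the committed fix: scan every stack service and prefer an
// explicit HCFS service type over the first non-null one encountered.
String dfsType = null;
for (ServiceInfo serviceInfoInstance : serviceInfos.values()) {
  String serviceType = serviceInfoInstance.getServiceType();
  if (serviceType == null) {
    continue;
  }
  if (dfsType == null || "HCFS".equals(serviceType)) {
    LOG.debug("Adding {} to command parameters for {}",
        serviceType, serviceInfoInstance.getName());
    dfsType = serviceType;
  }
}
if (dfsType != null) {
  clusterLevelParams.put(DFS_TYPE, dfsType);
}

With this ordering-independent scan, a stack that defines an HCFS service would report dfs_type as HCFS even though HDFS also exists in the stack definition, which is what the Python hook expects.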