Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.1.0
- Fix Version/s: None
Description
This defect is cloned from AMBARI-12851. Here is the original description:
Hadoop Env configuration defined at the stack level is managed only if the HDFS service is selected as part of the deployment. If HDFS is disabled and an alternate FS includes hadoop-env in its stack definition, the Ambari code should still recognize that configuration and create the corresponding hadoop-env.sh correctly on the Hadoop/Ambari agent machines.
I do not believe that the issue has been fully addressed by AMBARI-12837.
If a filesystem service is replacing HDFS, it has to manage the core-site and hadoop-env configurations, and possibly hdfs-site as well. Many of these properties are defined in site_properties.js, but with a 'serviceName' of 'HDFS'. When Ambari displays properties in the UI, these get left out. This is true both on the Customize Services page of the installation wizard and on the Configs tab for the HCFS service.
On installation, anything that the HCFS service has defined in the stack in core-site.xml, hdfs-site.xml or hadoop-env.xml does get saved to the database, even if some of the properties do not appear on the Customize Services page.
However, if the admin later tries to change any properties (for example, the hadoop-env.sh content), then only the properties that the UI displays, namely those NOT defined in site_properties.js, are saved.
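To make the scope concrete, the sketch below shows the rough shape of a property entry in site_properties.js; the attribute names follow the usual pattern in that file, but the specific entry is illustrative rather than a verbatim copy. Because 'serviceName' is hard-wired to 'HDFS', the UI associates the property with HDFS and drops it when HDFS is not part of the deployment.
{code:javascript}
// Illustrative entry modeled on the shape of site_properties.js (not a verbatim copy).
// The hard-coded serviceName is what ties the property to HDFS in the UI.
{
  "id": "site property",
  "name": "content",
  "displayName": "hadoop-env template",
  "category": "Advanced hadoop-env",
  "filename": "hadoop-env.xml",
  "serviceName": "HDFS"  // shown only when the HDFS service is selected
}
{code}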
The issue seems to be in these areas of the code, all in ambari-web/app/utils/config.js:
- mergePredefinedWithSaved
- mergePredefinedWithLoaded
- serviceConfigUiAttributes
All three of these functions set 'serviceName' in the serviceConfigObj to the serviceName taken from the predefined config properties in configsPropertyDef (see the sketch below).
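For reference, here is a minimal sketch of that shared pattern. It is paraphrased rather than copied from the Ambari source, and the variable name storedProperty is hypothetical, standing in for the saved or loaded property being merged.
{code:javascript}
// Paraphrased sketch of the pattern shared by mergePredefinedWithSaved,
// mergePredefinedWithLoaded and serviceConfigUiAttributes (not the actual source).
// configsPropertyDef is the predefined entry from site_properties.js; because its
// serviceName is hard-coded to 'HDFS', the merged property is attributed to HDFS
// even when an alternate HCFS service owns hadoop-env or core-site.
var serviceConfigObj = {
  name: storedProperty.name,
  value: storedProperty.value,
  filename: storedProperty.filename,
  // The predefined definition wins here, so the property disappears from the UI
  // of any deployment where HDFS is not installed.
  serviceName: configsPropertyDef ? configsPropertyDef.serviceName : storedProperty.serviceName
};
{code}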
Issue Links
- is a clone of AMBARI-12851: The definition and handling of hadoop-env should not be restrictive to HDFS (Resolved)