Description
Right now, changing the default file system does not work with the HDP 2.0.6+ stacks. Since running HDP against another file system in the cloud is likely to be common, adding support for this would be very useful. One alternative is a separate stack definition for other file systems; however, since I found only two minor bugs blocking this, I would rather extend the existing code.
Bugs:
- The Nagios install scripts assume that fs.defaultFS contains the NameNode port number.
- The HDFS install scripts use the hadoop dfsadmin command, which only works when HDFS is the default file system.
The fix for both places is to extract the NameNode address/port from dfs.namenode.rpc-address when it is defined, and use that instead of relying on fs.defaultFS.
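To illustrate the lookup order described above, here is a minimal Python sketch (function and parameter names are hypothetical, not the actual Ambari script code), assuming the configs are available as plain dicts keyed by property name:

```python
from urllib.parse import urlparse

def get_namenode_address(hdfs_site, core_site):
    """Return the NameNode host:port, preferring dfs.namenode.rpc-address.

    Fall back to parsing fs.defaultFS, which only carries the NameNode
    host and port when HDFS is the default file system.
    """
    rpc_address = hdfs_site.get("dfs.namenode.rpc-address")
    if rpc_address:
        return rpc_address
    # fs.defaultFS is a URI such as hdfs://nn-host:8020 or s3a://bucket
    parsed = urlparse(core_site["fs.defaultFS"])
    if parsed.port:
        return "%s:%d" % (parsed.hostname, parsed.port)
    return parsed.hostname
```

With dfs.namenode.rpc-address set, the value is used as-is regardless of what fs.defaultFS points at, which is exactly the behavior needed when the default file system is not HDFS.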
I haven't included any tests yet (this is my first Ambari patch and I'm not sure what is appropriate, so please comment).
Attachments