HDFS-1703 changed the behavior of the start/stop scripts so that the masters file is no longer used to indicate which hosts to start the 2NN on. When using start-dfs.sh, the 2NN is now started only on hosts where dfs.namenode.secondary.http-address is configured with a non-wildcard IP. This means you cannot start a 2NN whose http-address is specified with a wildcard IP. We should allow a 2NN to be started with the default config, i.e. start-dfs.sh should start a NN, 2NN, and DN. The packaging already works this way (it doesn't use start-dfs.sh; it uses hadoop-daemon.sh directly without first checking getconf), so let's bring start-dfs.sh in line with this behavior.
|Transition||Time In Source Status||Execution Times||Last Executer||Last Execution Date|
|1d 9h 53m||1||Eli Collins||06/Feb/12 09:02|
|27d 17h 46m||1||Arun C Murthy||05/Mar/12 02:48|
|Field||Original Value||New Value|
|Assignee||Eli Collins [ eli2 ]||Eli Collins [ eli ]|
|Status||Resolved [ 5 ]||Closed [ 6 ]|
|Fix Version/s||0.23.1 [ 12318885 ]|
|Status||Open [ 1 ]||Resolved [ 5 ]|
|Hadoop Flags||Reviewed [ 10343 ]|
|Resolution||Fixed [ 1 ]|
|Summary||The start and stop scripts should start the 2NN when using the default configuration||The start/stop scripts don't start/stop the 2NN when using the default configuration|
|Summary||start-dfs.sh won't start the 2NN if dfs.namenode.secondary.http-address is default or specified with a wildcard IP||The start and stop scripts should start the 2NN when using the default configuration|
|Priority||Critical [ 2 ]||Minor [ 4 ]|
|Summary||The 2NN won't start if dfs.namenode.secondary.http-address is default or specified with a wildcard IP and port||start-dfs.sh won't start the 2NN if dfs.namenode.secondary.http-address is default or specified with a wildcard IP|
|Assignee||Eli Collins [ eli2 ]|
Looks like DFSUtil address matching doesn't find a match if the http-address is specified using a wildcard IP and a port. It should return 0.0.0.0:50090 in this case, which would allow the 2NN to start.
Also, unless http-address is explicitly configured in hdfs-site.xml, the 2NN will not start, since DFSUtil#getSecondaryNameNodeAddresses does not fall back to the default value. That may be confusing to people who expect the default value to be used.
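The fallback behavior described above could be sketched as follows. This is a hypothetical illustration, not the actual DFSUtil code; the class and method names are made up, and only the property default (0.0.0.0:50090) comes from this issue:

```java
// Hypothetical sketch of the proposed fallback; not the real DFSUtil code.
public class SecondaryAddressSketch {
    // Default value of dfs.namenode.secondary.http-address in hdfs-default.xml.
    static final String DEFAULT_2NN_HTTP_ADDRESS = "0.0.0.0:50090";

    // Stand-in for DFSUtil#getSecondaryNameNodeAddresses: instead of
    // returning nothing when the key is unset (which makes start-dfs.sh
    // report "Secondary namenodes are not configured"), fall back to the
    // default wildcard address so a local 2NN can still be started.
    static String secondaryHttpAddress(String configuredValue) {
        if (configuredValue == null || configuredValue.isEmpty()) {
            return DEFAULT_2NN_HTTP_ADDRESS;
        }
        // A wildcard value like "0.0.0.0:50090" is returned as-is rather
        // than treated as "no match".
        return configuredValue;
    }

    public static void main(String[] args) {
        System.out.println(secondaryHttpAddress(null));              // 0.0.0.0:50090
        System.out.println(secondaryHttpAddress("0.0.0.0:50090"));   // 0.0.0.0:50090
        System.out.println(secondaryHttpAddress("localhost:50090")); // localhost:50090
    }
}
```

With this fallback, the transcript below would start a 2NN instead of printing the "not configured" error.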
hadoop-0.23.1-SNAPSHOT $ cat /home/eli/hadoop/conf3/hdfs-site.xml
hadoop-0.23.1-SNAPSHOT $ ./bin/hdfs --config ~/hadoop/conf3 getconf -secondarynamenodes
hadoop-0.23.1-SNAPSHOT $ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/eli/hadoop/dirs3/logs/eli/hadoop-eli-namenode-eli-thinkpad.out
localhost: starting datanode, logging to /home/eli/hadoop/dirs3/logs/eli/hadoop-eli-datanode-eli-thinkpad.out
Secondary namenodes are not configured. Cannot start secondary namenodes.
This works if, e.g., localhost:50090 is used.
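For reference, that workaround amounts to setting the address explicitly in hdfs-site.xml. A minimal sketch; the property name is the real HDFS key, and the value is the example from this comment:

```xml
<configuration>
  <!-- Explicit non-wildcard address so getconf -secondarynamenodes
       finds a host and start-dfs.sh starts the 2NN. -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>localhost:50090</value>
  </property>
</configuration>
```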
We should also update the HDFS user guide to remove the reference to the masters file, since it's no longer used to configure which hosts the 2NN runs on.