Hadoop HDFS / HDFS-2893

The start/stop scripts don't start/stop the 2NN when using the default configuration

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.23.1
    • Fix Version/s: 0.23.1
    • Component/s: None
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      HDFS-1703 changed the behavior of the start/stop scripts so that the masters file is no longer used to indicate which hosts to start the 2NN on. When using start-dfs.sh, the 2NN is now started only on hosts where dfs.namenode.secondary.http-address is configured with a non-wildcard IP, which means you cannot start a 2NN whose http-address is specified with a wildcard IP. We should allow a 2NN to be started with the default config, i.e. start-dfs.sh should start a NN, 2NN, and DN. The packaging already works this way (it doesn't use start-dfs.sh, it uses hadoop-daemon.sh directly without first checking getconf), so let's bring start-dfs.sh in line with this behavior.
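
      For reference, a rough sketch of the start-dfs.sh step being described (approximate, not a verbatim copy of the branch-0.23 script; the exact variable names and hadoop-daemons.sh arguments are assumptions based on the description above):

      # Sketch of the 2NN startup step in start-dfs.sh (approximate).
      # The script asks getconf which hosts should run a secondary namenode:
      SECONDARY_NAMENODES=$("$HADOOP_PREFIX/bin/hdfs" getconf -secondarynamenodes 2>/dev/null)

      # With the default dfs.namenode.secondary.http-address (a wildcard IP,
      # e.g. 0.0.0.0), no usable hostname comes back, this branch is skipped,
      # and no 2NN is started; that is the behavior this issue asks to change.
      if [ -n "$SECONDARY_NAMENODES" ]; then
        echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"
        "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" --config "$HADOOP_CONF_DIR" \
            --hostnames "$SECONDARY_NAMENODES" \
            --script "$HADOOP_PREFIX/bin/hdfs" start secondarynamenode
      fi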

    People

    • Assignee: Eli Collins (eli)
    • Reporter: Eli Collins (eli2)
    • Votes: 0
    • Watchers: 4

    Dates

    • Created:
    • Updated:
    • Resolved: