Description
We've seen folks who were given a Hadoop configuration to act as a client accidentally type "hadoop namenode" and put things into a confused or incorrect state. Most recently, we've seen data corruption when users accidentally run extra secondary namenodes (https://issues.apache.org/jira/browse/HDFS-2305).
I'd like to propose that we introduce a configuration property, say "client.poison.servers", which, if set, disables the Hadoop daemons (nn, snn, jt, tt, etc.) with a reasonable error message. Hadoop administrators could then hand out/install such configs on machines intended to be clients only, with a little less worry that daemons will accidentally get run there.
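A minimal sketch of what the daemon-side guard could look like. The property name "client.poison.servers" comes from this proposal; the class and method names are hypothetical, and plain java.util.Properties stands in for Hadoop's Configuration class to keep the sketch self-contained:

```java
import java.util.Properties;

// Hypothetical startup guard: each daemon would call checkNotPoisoned()
// before initializing. If the config marks this machine as client-only,
// the daemon refuses to start with a clear error message.
public class DaemonGuard {

    // Property name proposed in this issue.
    static final String POISON_KEY = "client.poison.servers";

    /** Returns true if this configuration forbids running server daemons. */
    static boolean serversPoisoned(Properties conf) {
        return Boolean.parseBoolean(conf.getProperty(POISON_KEY, "false"));
    }

    /** Called at daemon startup; aborts with a reasonable error message. */
    static void checkNotPoisoned(Properties conf, String daemonName) {
        if (serversPoisoned(conf)) {
            throw new IllegalStateException(
                "Refusing to start " + daemonName + ": " + POISON_KEY
                + " is set; this machine is configured as a Hadoop client only.");
        }
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(POISON_KEY, "true");
        try {
            checkNotPoisoned(conf, "namenode");
            System.out.println("namenode started");
        } catch (IllegalStateException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

An unset or false property leaves daemon startup unaffected, so existing server configs would not need to change.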
Issue Links
- is related to: HADOOP-6590 Add a username check for hadoop sub-commands (Resolved)