Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.2.0
- Fix Version/s: None
- Component/s: None
Description
Currently DFS disallows more than one datanode from running on the same computer if they are started with the same Hadoop conf dir. However, this does not prevent more than one datanode from being started, each using a different conf dir (strictly speaking, a different pid file). If every machine runs two such datanodes, the namenode will be kept busy deleting and replicating blocks, which may eventually lead to block loss.
Suggested solution: put the pid file in the data directory and disallow this configuration.
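The suggested fix could be sketched as an exclusive lock file held inside the data directory itself, so the guard follows the data rather than the conf dir. A minimal sketch (the class name `DataDirLock`, the method names, and the lock file name `in_use.lock` are illustrative assumptions, not the actual Hadoop implementation):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

// Hypothetical sketch: guard a DFS data directory with an exclusive
// OS-level file lock so a second datanode process, regardless of which
// conf dir it was started from, cannot claim the same directory.
public class DataDirLock {
    private RandomAccessFile lockFile;
    private FileLock lock;

    // Tries to lock <dataDir>/in_use.lock. Returns true if this process
    // now owns the directory, false if another process already holds it.
    public boolean tryLock(File dataDir) throws IOException {
        lockFile = new RandomAccessFile(new File(dataDir, "in_use.lock"), "rw");
        lock = lockFile.getChannel().tryLock();
        if (lock == null) {
            lockFile.close();  // another datanode owns this data dir
            return false;
        }
        return true;
    }

    // Releases the lock, e.g. on clean datanode shutdown.
    public void release() throws IOException {
        if (lock != null) {
            lock.release();
        }
        if (lockFile != null) {
            lockFile.close();
        }
    }
}
```

Because the lock lives in the data directory, two datanodes started from different conf dirs but pointing at the same `dfs.data.dir` would still collide on the lock, which is exactly the case the pid-file-per-conf-dir scheme misses.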
Issue Links
- is duplicated by HADOOP-124: don't permit two datanodes to run from same dfs.data.dir (Closed)