Details
Type: Improvement
Status: Open
Priority: Minor
Resolution: Unresolved
Description
I just managed to blow away a DFS by accidentally misconfiguring dfs.data.dir and mapred.local.dir to contain several directories in common. On startup, the TaskTracker clears out mapred.local.dir, which wipes the DataNode's block storage.
A check for this overlap should be reasonably easy to put in place and would make Hadoop more idiot-proof (or, in my case, lack-of-sleep-proof). A sketch of one possible check follows.
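A minimal sketch of what such a check could look like, assuming it is invoked during TaskTracker startup before any directory cleanup. The class and method names here are hypothetical; only the dfs.data.dir and mapred.local.dir property names come from this report.

import java.io.File;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;

// Hypothetical startup guard: refuse to start if any mapred.local.dir
// entry resolves to the same directory as a dfs.data.dir entry, so the
// TaskTracker's cleanup cannot delete DataNode block storage.
public class StorageDirOverlapCheck {

  public static void checkNoOverlap(Configuration conf) throws IOException {
    Set<String> dataDirs = new HashSet<String>();
    String[] dfsDirs = conf.getStrings("dfs.data.dir");
    if (dfsDirs != null) {
      for (String dir : dfsDirs) {
        // Canonicalize so "/data/" and "/data" (or a symlink) compare equal.
        dataDirs.add(new File(dir.trim()).getCanonicalPath());
      }
    }
    String[] localDirs = conf.getStrings("mapred.local.dir");
    if (localDirs != null) {
      for (String dir : localDirs) {
        String canonical = new File(dir.trim()).getCanonicalPath();
        if (dataDirs.contains(canonical)) {
          throw new IOException("mapred.local.dir entry " + canonical
              + " is also listed in dfs.data.dir; refusing to start"
              + " rather than delete DFS block storage");
        }
      }
    }
  }
}

Comparing canonical paths rather than raw strings catches overlaps hidden behind trailing slashes or symlinks, which a plain string comparison of the two property values would miss.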