Description
The configuration system is easy to misconfigure, and I think we need to strongly divide the server configs from the client configs.
An example of the problem was a configuration where the task tracker had a hadoop-site.xml that set mapred.reduce.tasks to 1. As a result, the job tracker had the right number of reduces, but the map task thought there was a single reduce. This led to a failure that was hard to diagnose. The sketch below shows how a single shared config makes this leak possible.
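A minimal sketch of today's loading behavior, assuming Configuration's addResource(String) API; the ConfLeakDemo class and the explicit loading calls are illustrative only, since the real Configuration loads these resources itself:

import org.apache.hadoop.conf.Configuration;

public class ConfLeakDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("hadoop-default.xml"); // shipped defaults
    conf.addResource("hadoop-site.xml");    // local overrides, applied to BOTH
                                            // client and server code paths
    // On a tasktracker whose hadoop-site.xml sets mapred.reduce.tasks=1,
    // every task JVM sees 1 reduce, no matter what the job requested:
    System.out.println(conf.getInt("mapred.reduce.tasks", -1));
  }
}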
Therefore, I propose separating the configuration types as follows:
class Configuration;
// reads site-default.xml, hadoop-default.xml
class ServerConf extends Configuration;
// reads hadoop-server.xml, $super
class DfsServerConf extends ServerConf;
// reads dfs-server.xml, $super
class MapRedServerConf extends ServerConf;
// reads mapred-server.xml, $super
class ClientConf extends Configuration;
// reads hadoop-client.xml, $super
class JobConf extends ClientConf;
// reads job.xml, $super
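A minimal sketch of how this hierarchy could be wired up, assuming Configuration keeps its addResource(String) method; the class and resource names are taken from the list above, and everything else is illustrative:

import org.apache.hadoop.conf.Configuration;

/** Daemon-side base: layers hadoop-server.xml over the defaults
    that Configuration itself loads (site-default.xml, hadoop-default.xml). */
public class ServerConf extends Configuration {
  public ServerConf() {
    addResource("hadoop-server.xml");
  }
}

class DfsServerConf extends ServerConf {
  public DfsServerConf() {
    addResource("dfs-server.xml");    // namenode/datanode overrides
  }
}

class MapRedServerConf extends ServerConf {
  public MapRedServerConf() {
    addResource("mapred-server.xml"); // jobtracker/tasktracker overrides
  }
}

class ClientConf extends Configuration {
  public ClientConf() {
    addResource("hadoop-client.xml");
  }
}

class JobConf extends ClientConf {
  public JobConf() {
    addResource("job.xml");           // per-job overrides win over everything
  }
}

Because each constructor runs after its superclass's, later resources override earlier ones, which gives exactly the "$super" layering described in the comments above.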
Note, in particular, that nothing corresponds to hadoop-site.xml, which today overrides both the client and server configs. Furthermore, the properties from the *-default.xml files should never be saved into job.xml; one way to enforce that is sketched below.
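Illustrative only: one way JobConf could persist just the non-default properties, by diffing against a defaults-only Configuration. The JobConfWriter class and method are hypothetical; the iteration, Configuration(boolean), and writeXml calls are modeled on Hadoop's Configuration API, but treat the whole method as an assumption rather than the proposed implementation:

import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class JobConfWriter {
  /** Write only the properties that differ from the loaded defaults,
      so *-default.xml values never end up in job.xml. */
  public static void writeNonDefaults(Configuration conf, OutputStream out)
      throws IOException {
    Configuration defaults = new Configuration();   // defaults only
    Configuration delta = new Configuration(false); // start empty
    for (Map.Entry<String, String> e : conf) {
      String def = defaults.get(e.getKey());
      if (def == null || !def.equals(e.getValue())) {
        delta.set(e.getKey(), e.getValue());        // a real override
      }
    }
    delta.writeXml(out);                            // becomes job.xml
  }
}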
Issue Links
- blocks HADOOP-1843: Remove deprecated code in Configuration/JobConf (Closed)
- relates to HADOOP-1881: Update documentation for hadoop's configuration post HADOOP-785 (Resolved)