Details
- Type: New Feature
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Hadoop Flags: Reviewed
- Release Note: New plugin facility for namenode and datanode instantiates classes named in new configuration properties dfs.datanode.plugins and dfs.namenode.plugins.
Description
Adding support for pluggable components would allow exporting DFS functionality over arbitrary protocols, such as Thrift or Protocol Buffers. I'm opening this issue at Dhruba's suggestion in HADOOP-4707.
Plug-in implementations would extend this base class:
abstract class Plugin {
  public abstract void datanodeStarted(DataNode datanode);
  public abstract void datanodeStopping();
  public abstract void namenodeStarted(NameNode namenode);
  public abstract void namenodeStopping();
}
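For example, a Thrift export (the motivating use case) could be packaged as a plugin along these lines. This is a minimal sketch only; ThriftDatanodePlugin and ThriftServer are hypothetical names, not part of the proposal:

// Hypothetical plugin that exports a datanode's functionality over Thrift
// and ties the server's lifetime to the node's.
class ThriftDatanodePlugin extends Plugin {
  private ThriftServer server; // hypothetical wrapper around a Thrift service

  public void datanodeStarted(DataNode datanode) {
    server = new ThriftServer(datanode); // expose the running datanode
    server.start();
  }

  public void datanodeStopping() {
    if (server != null) {
      server.stop(); // stop serving before the node shuts down
    }
  }

  // This plugin targets datanodes only; the namenode hooks are no-ops.
  public void namenodeStarted(NameNode namenode) {}
  public void namenodeStopping() {}
}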
Name node instances would then start the plug-ins according to a configuration object, and would also shut them down when the node goes down:
public class NameNode {
  // [...]
  private void initialize(Configuration conf) {
    // [...]
    for (Plugin p : PluginManager.loadPlugins(conf)) {
      p.namenodeStarted(this);
    }
  }
  // [...]
  public void stop() {
    if (stopRequested)
      return;
    stopRequested = true;
    for (Plugin p : plugins) {
      p.namenodeStopping();
    }
    // [...]
  }
  // [...]
}
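PluginManager.loadPlugins(conf) is referenced but not defined above. A plausible sketch, assuming a comma-separated list of class names in a configuration property (the property name is taken from the release note above; the reflection-based loading is an assumption):

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

class PluginManager {
  // Instantiate every class named in the plugin configuration property.
  // A datanode would read dfs.datanode.plugins instead.
  public static List<Plugin> loadPlugins(Configuration conf) {
    List<Plugin> plugins = new ArrayList<Plugin>();
    String[] classNames = conf.getStrings("dfs.namenode.plugins");
    if (classNames == null) {
      return plugins; // no plugins configured
    }
    for (String className : classNames) {
      try {
        plugins.add((Plugin) Class.forName(className.trim()).newInstance());
      } catch (Exception e) {
        throw new RuntimeException("Unable to load plugin " + className, e);
      }
    }
    return plugins;
  }
}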
Data nodes would do the same in DataNode.startDatanode() and DataNode.shutdown().
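With the facility as committed (per the release note in the Details above), the classes to load are named in the dfs.namenode.plugins and dfs.datanode.plugins configuration properties. A hypothetical hdfs-site.xml entry (the class name is a placeholder):

<property>
  <name>dfs.namenode.plugins</name>
  <!-- comma-separated list of Plugin implementations to instantiate -->
  <value>org.example.MyNamenodePlugin</value>
</property>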
Attachments
Issue Links
- blocks
  - HDFS-417 Improvements to Hadoop Thrift bindings (Resolved)
- is related to
  - HADOOP-8832 backport serviceplugin to branch-1 (Closed)
  - HDFS-3963 backport namenode/datanode serviceplugin to branch-1 (Closed)
- relates to
  - HDFS-217 Need an FTP Server implementation over HDFS (Open)
  - MAPREDUCE-461 Enable ServicePlugins for the JobTracker (Closed)