Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
Description
HADOOP-5257 added the ability for the NameNode and DataNode to start and stop ServicePlugin implementations at NN/DN start/stop. However, this integration is insufficient for some common use cases.
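For reference, the lifecycle interface that HADOOP-5257 added looks roughly like the following (a sketch from memory of org.apache.hadoop.util.ServicePlugin, not the exact source):

import java.io.Closeable;

public interface ServicePlugin extends Closeable {
  // Invoked when the service (NameNode or DataNode) starts;
  // the service instance is passed in as an untyped Object.
  void start(Object service);
  // Invoked when the service shuts down.
  void stop();
}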
We should add some functionality for plugins to subscribe to events generated by the service they're plugging into. Some potential hook points are:
NameNode:
- new datanode registered
- datanode has died
- exception caught
- etc?
DataNode:
- startup
- initial registration with NN complete (this is important for HADOOP-4707 to sync up datanode.dnRegistration.name with the NN-side registration)
- namenode reconnect
- some block transfer hooks?
- exception caught
I see two potential routes for implementation:
1) We make an enum for the types of hook points and have a general function in the ServicePlugin interface. Something like:

enum HookPoint {
  DN_STARTUP,
  DN_RECEIVED_NEW_BLOCK,
  DN_CAUGHT_EXCEPTION,
  ...
}
void runHook(HookPoint hp, Object value);
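To illustrate what this route means for a plugin author, a hypothetical implementation would have to cast the untyped payload based on the enum value (MyPlugin and Block are placeholder names here):

class MyPlugin implements ServicePlugin {
  public void runHook(HookPoint hp, Object value) {
    // Each hook delivers a differently-typed payload, so the plugin
    // casts based on the enum; nothing is checked at compile time.
    switch (hp) {
      case DN_RECEIVED_NEW_BLOCK:
        Block b = (Block) value;
        // handle the new block
        break;
      case DN_CAUGHT_EXCEPTION:
        Exception e = (Exception) value;
        // handle the exception
        break;
      default:
        break;
    }
  }
  // ... start() and stop() omitted
}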
2) We make classes specific to each "pluggable" as was originally suggested in HADOOP-5257. Something like:
class DataNodePlugin {
  void datanodeStarted() {}
  void receivedNewBlock(block info, etc) {}
  void caughtException(Exception e) {}
  ...
}
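Under option (2), the DataNode side would simply iterate over its registered plugins and call the corresponding method directly. A minimal dispatch sketch, inside DataNode (the notifyBlockReceived name and the plugin-loading details are assumptions for illustration):

// Plugins would be loaded at startup, e.g. from a
// dfs.datanode.plugins configuration key, and stopped at shutdown.
private List<DataNodePlugin> plugins;

void notifyBlockReceived(Block block) {
  for (DataNodePlugin p : plugins) {
    try {
      p.receivedNewBlock(block);
    } catch (Throwable t) {
      // A misbehaving plugin must not take down the DataNode.
      LOG.warn("DataNodePlugin " + p + " failed in receivedNewBlock", t);
    }
  }
}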
I personally prefer option (2), since we can ensure plugin API compatibility at compile time and we avoid an ugly switch statement in a runHook() function.
I'm interested to hear people's thoughts here.
Issue Links
- blocks
  - HDFS-417 Improvements to Hadoop Thrift bindings (Resolved)
- is depended upon by
  - HDFS-460 Expose NN and DN hooks to service plugins (Resolved)
- is related to
  - HADOOP-8832 backport serviceplugin to branch-1 (Closed)
  - HADOOP-7821 Hadoop event notification system (Open)
- relates to
  - HDFS-417 Improvements to Hadoop Thrift bindings (Resolved)