Description
In our HDFS cluster we observed that the append operation can hold the namenode write lock up to 10X longer than other write operations. A flame graph collected on the namenode (see attachment: append-flamegraph.png) shows that most of the time in an append call is spent in getNumLiveDataNodes():
/** @return the number of live datanodes. */
public int getNumLiveDataNodes() {
  int numLive = 0;
  synchronized (this) {
    for (DatanodeDescriptor dn : datanodeMap.values()) {
      if (!isDatanodeDead(dn)) {
        numLive++;
      }
    }
  }
  return numLive;
}
This method synchronizes on the DatanodeManager, which is particularly expensive in large clusters: datanodeMap is modified under the same lock in many places, such as DN heartbeat processing, so the full scan of the map and those updates block each other.
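To make the contention concrete, here is a minimal, self-contained sketch of the pattern described above (these are not the actual HDFS classes; the names and the staleness check are simplified assumptions): heartbeat handling mutates the map under the DatanodeManager monitor, while getNumLiveDataNodes() scans the whole map under that same monitor, so each blocks the other.

import java.util.HashMap;
import java.util.Map;

class DatanodeManagerSketch {
  // uuid -> last heartbeat timestamp in millis (stand-in for datanodeMap)
  private final Map<String, Long> datanodeMap = new HashMap<>();
  private static final long STALE_INTERVAL_MS = 30_000;

  // Called very frequently on a large cluster; mutates the map under the monitor.
  synchronized void handleHeartbeat(String datanodeUuid) {
    datanodeMap.put(datanodeUuid, System.currentTimeMillis());
  }

  // O(#datanodes) scan under the same monitor; while it runs, heartbeat
  // handling is blocked, and vice versa.
  synchronized int getNumLiveDataNodes() {
    int numLive = 0;
    long now = System.currentTimeMillis();
    for (long lastHeartbeat : datanodeMap.values()) {
      if (now - lastHeartbeat < STALE_INTERVAL_MS) {
        numLive++;
      }
    }
    return numLive;
  }
}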
For the append operation, getNumLiveDataNodes() is invoked from isSufficientlyReplicated():
/**
 * Check if a block is replicated to at least the minimum replication.
 */
public boolean isSufficientlyReplicated(BlockInfo b) {
  // Compare against the lesser of the minReplication and number of live DNs.
  final int replication =
      Math.min(minReplication, getDatanodeManager().getNumLiveDataNodes());
  return countNodes(b).liveReplicas() >= replication;
}
The way the replication threshold is calculated here is suboptimal: getNumLiveDataNodes() is invoked on every call, even though minReplication is usually much smaller than the number of live datanodes, so the expensive call almost never changes the result. A cheaper ordering is sketched below.
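One way to avoid the expensive call in the common case is to count the block's live replicas first and only consult getNumLiveDataNodes() when there are fewer live replicas than minReplication. The following is a sketch written against the fields and helpers visible in the snippet above (minReplication, countNodes(), getDatanodeManager()); it is not the committed patch.

// Sketch only: reuses minReplication, countNodes() and getDatanodeManager()
// from the snippet above; not the committed patch.
public boolean isSufficientlyReplicated(BlockInfo b) {
  final int liveReplicas = countNodes(b).liveReplicas();
  if (liveReplicas >= minReplication) {
    // Common case: enough live replicas regardless of cluster size,
    // so the DatanodeManager lock is never taken.
    return true;
  }
  // Rare case: the cluster may have fewer live datanodes than minReplication
  // (e.g. a tiny test cluster), so fall back to the original comparison.
  final int replication =
      Math.min(minReplication, getDatanodeManager().getNumLiveDataNodes());
  return liveReplicas >= replication;
}

This keeps the result identical to the original method while taking the DatanodeManager lock only when the live-replica count is below minReplication.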
Attachments
Issue Links
- is related to HDFS-14171 Performance improvement in Tailing EditLog (Resolved)