Description
I am seeing that when we delete a large directory containing many blocks, datanode heartbeat intervals increase significantly, from the normal value of 3 seconds to as much as 50 seconds. The heartbeat thread in the Datanode deletes the blocks sequentially, and this synchronous deletion is what delays the heartbeats.
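One way to avoid blocking the heartbeat thread is to hand deletions off to a background thread pool, in the spirit of the AsyncDiskService referenced below. The sketch here is hypothetical (the class name, pool size, and counter are illustrative, not Hadoop's actual implementation); it only shows the scheduling pattern, with the real file deletion replaced by a counter so the example is self-contained:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: instead of deleting block files inline in the
// heartbeat thread, submit each deletion to a background executor so
// the heartbeat loop returns immediately.
public class AsyncBlockDeleter {
    private final ExecutorService deletionPool;
    private final AtomicInteger deletedCount = new AtomicInteger();

    public AsyncBlockDeleter(int threads) {
        this.deletionPool = Executors.newFixedThreadPool(threads);
    }

    // Schedule a block deletion without blocking the caller
    // (e.g. the heartbeat thread).
    public void scheduleDelete(String blockPath) {
        deletionPool.submit(() -> {
            // Real code would delete the block and meta files on disk;
            // here we only count completions to keep the sketch runnable.
            deletedCount.incrementAndGet();
        });
    }

    // Drain the pool and report how many deletions completed.
    public int awaitAndCount() throws InterruptedException {
        deletionPool.shutdown();
        deletionPool.awaitTermination(10, TimeUnit.SECONDS);
        return deletedCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncBlockDeleter deleter = new AsyncBlockDeleter(4);
        for (int i = 0; i < 1000; i++) {
            deleter.scheduleDelete("blk_" + i);
        }
        System.out.println(deleter.awaitAndCount());
    }
}
```

With this pattern the heartbeat thread spends only the time needed to enqueue the work, so heartbeat intervals stay near the 3-second baseline regardless of how many blocks are pending deletion.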
Attachments
Issue Links
- is blocked by
  - HADOOP-6433 Add AsyncDiskService that is used in both hdfs and mapreduce (Closed)
- is related to
  - HADOOP-774 Datanodes fails to heartbeat when a directory with a large number of blocks is deleted (Closed)
  - HADOOP-994 DFS Scalability : a BlockReport that returns large number of blocks-to-be-deleted cause datanode to lost connectivity to namenode (Closed)
- relates to
  - MAPREDUCE-1213 TaskTrackers restart is very slow because it deletes distributed cache directory synchronously (Closed)