Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Fix Version/s: 0.17.0
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
Description
In HADOOP-3633, the namenode was assigning some datanodes hundreds of blocks to receive in a short period, which caused those datanodes to run out of memory (threads).
Most of the blocks came from a remote rack.
Looking at the code,
chooseLocalRack(results.get(1), excludedNodes, blocksize, maxNodesPerRack, results);
was sometimes not choosing the local rack of the writer (source).
As a result, when a datanode went down, other datanodes on the same rack received a large number of blocks from remote racks.
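To illustrate the intended behavior of the chooseLocalRack call in the snippet above, here is a minimal, self-contained Java sketch: prefer a candidate node on the same rack as the reference node, and fall back to a remote node only when that rack has no free candidate. The Node record, rack strings, and the fallback rule are hypothetical simplifications for illustration, not Hadoop's actual implementation; the reported bug amounts to the fallback path firing when a local-rack candidate was in fact available.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of a chooseLocalRack-style helper.
public class ChooseLocalRackSketch {
    // Simplified datanode model: a name and a rack id (illustrative only).
    record Node(String name, String rack) {}

    // Prefer a node on the same rack as `reference`; only fall back to
    // some other node when the local rack has no candidate left.
    static Node chooseLocalRack(Node reference, List<Node> candidates) {
        Optional<Node> sameRack = candidates.stream()
                .filter(n -> n.rack().equals(reference.rack()))
                .findFirst();
        // The bug report describes this remote-rack fallback being taken
        // even though a same-rack candidate existed.
        return sameRack.orElseGet(() -> candidates.get(0));
    }

    public static void main(String[] args) {
        Node writer = new Node("dn0", "/rackA");
        List<Node> candidates = List.of(
                new Node("dn1", "/rackB"),
                new Node("dn2", "/rackA"));
        Node chosen = chooseLocalRack(writer, candidates);
        System.out.println(chosen.name()); // prints "dn2": same rack preferred
    }
}
```

When the same-rack preference is honored, re-replication traffic after a datanode failure stays mostly rack-local instead of flooding one rack's nodes with blocks streamed from remote racks.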
Attachments
Issue Links
- relates to: HADOOP-3633 Uncaught exception in DataXceiveServer (Closed)