Details
Type: Improvement
Priority: Major
Status: Resolved
Resolution: Fixed
Hadoop Flags: Reviewed
Description
In the current BlockPlacementPolicyDefault, when choosing a datanode storage to place a block, we have the following logic:
    final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
        chosenNode.getStorageInfos());
    int i = 0;
    boolean search = true;
    for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
        .entrySet().iterator(); search && iter.hasNext();) {
      Map.Entry<StorageType, Integer> entry = iter.next();
      for (i = 0; i < storages.length; i++) {
        StorageType type = entry.getKey();
        final int newExcludedNodes = addIfIsGoodTarget(storages[i],
We iterate over all storages of the candidate datanode (two nested for loops, although the counts are usually small) even when the datanode itself is not a good target (e.g. decommissioned, stale, or too busy), because currently all the checks are done inside addIfIsGoodTarget.
We can fail fast instead: check the datanode-level conditions first, and if the datanode is not good, skip shuffling and iterating its storages entirely. This is more efficient.
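The proposed fail-fast order can be sketched as follows. This is a simplified stand-alone model, not the actual Hadoop classes: Datanode, Storage, isGoodDatanode, and chooseStorage are hypothetical stand-ins for DatanodeDescriptor, DatanodeStorageInfo, and the node-level portion of addIfIsGoodTarget.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the fail-fast idea: evaluate datanode-level conditions once,
// before shuffling and iterating the node's storages.
public class FailFastPlacement {
    // Hypothetical stand-in for DatanodeStorageInfo.
    static class Storage {
        final long remaining;
        Storage(long remaining) { this.remaining = remaining; }
    }

    // Hypothetical stand-in for DatanodeDescriptor.
    static class Datanode {
        final boolean decommissioned, stale, tooBusy;
        final List<Storage> storages;
        Datanode(boolean decommissioned, boolean stale, boolean tooBusy,
                 List<Storage> storages) {
            this.decommissioned = decommissioned;
            this.stale = stale;
            this.tooBusy = tooBusy;
            this.storages = storages;
        }
    }

    // Datanode-level checks that do not depend on any particular storage.
    static boolean isGoodDatanode(Datanode dn) {
        return !dn.decommissioned && !dn.stale && !dn.tooBusy;
    }

    // Returns a storage with enough remaining space, or null if none fits.
    static Storage chooseStorage(Datanode dn, long required) {
        if (!isGoodDatanode(dn)) {
            return null; // fail fast: no shuffle, no storage iteration
        }
        List<Storage> shuffled = new ArrayList<>(dn.storages);
        Collections.shuffle(shuffled);
        for (Storage s : shuffled) {
            if (s.remaining >= required) { // per-storage checks only for good nodes
                return s;
            }
        }
        return null;
    }
}
```

With this ordering, a decommissioned, stale, or busy datanode is rejected with one boolean check, instead of paying for a shuffle plus up to storageTypes × storages iterations of addIfIsGoodTarget.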
Attachments
Issue Links
- is related to: HDFS-8946 Improve choosing datanode storage for block placement (Resolved)