Hadoop HDFS / HDFS-8884

Fail-fast check in BlockPlacementPolicyDefault#chooseTarget


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      In the current BlockPlacementPolicyDefault, when choosing a datanode storage to place a block, we have the following logic:

          final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
              chosenNode.getStorageInfos());
          int i = 0;
          boolean search = true;
          for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
              .entrySet().iterator(); search && iter.hasNext(); ) {
            Map.Entry<StorageType, Integer> entry = iter.next();
            for (i = 0; i < storages.length; i++) {
              StorageType type = entry.getKey();
              final int newExcludedNodes = addIfIsGoodTarget(storages[i],
                  ...

      We will iterate over all storages of the candidate datanode (two nested for loops, although their bounds are usually small) even if the datanode itself is not a good target (e.g., decommissioned, stale, too busy), since currently all the checks are done in addIfIsGoodTarget.

      We can fail fast: check the datanode-level conditions first; if the datanode is not a good target, there is no need to shuffle and iterate over its storages. That is more efficient.
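      The fail-fast idea can be sketched with a minimal, self-contained example. Note the types below (Node, Storage, isGoodDatanode, chooseStorage) are hypothetical simplifications for illustration, not the actual Hadoop classes or the patch itself:

      ```java
      import java.util.List;

      public class FailFastSketch {
          // Simplified stand-ins for DatanodeDescriptor / DatanodeStorageInfo.
          static class Storage {
              final long remaining;
              Storage(long remaining) { this.remaining = remaining; }
          }
          static class Node {
              final boolean decommissioned, stale, tooBusy;
              final List<Storage> storages;
              Node(boolean decommissioned, boolean stale, boolean tooBusy,
                   List<Storage> storages) {
                  this.decommissioned = decommissioned;
                  this.stale = stale;
                  this.tooBusy = tooBusy;
                  this.storages = storages;
              }
          }

          // Datanode-level checks pulled out so a bad node is rejected
          // before any storage is shuffled or examined.
          static boolean isGoodDatanode(Node node) {
              return !node.decommissioned && !node.stale && !node.tooBusy;
          }

          // Returns the number of storages examined (0 when we fail fast).
          static int chooseStorage(Node node, long blockSize) {
              if (!isGoodDatanode(node)) {
                  return 0;            // fail fast: skip storage iteration
              }
              int examined = 0;
              for (Storage s : node.storages) {
                  examined++;
                  if (s.remaining >= blockSize) {
                      break;           // found a suitable storage
                  }
              }
              return examined;
          }

          public static void main(String[] args) {
              Node bad = new Node(true, false, false,
                  List.of(new Storage(100), new Storage(200)));
              Node good = new Node(false, false, false,
                  List.of(new Storage(10), new Storage(200)));
              System.out.println(chooseStorage(bad, 50));   // 0: rejected fast
              System.out.println(chooseStorage(good, 50));  // 2: second storage fits
          }
      }
      ```

      The point of the restructuring is that the per-node conditions are evaluated once, before the per-storage loop, instead of being re-checked inside addIfIsGoodTarget for every storage of a node that could never qualify.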

      Attachments

        1. HDFS-8884.002.patch (18 kB, Yi Liu)
        2. HDFS-8884.001.patch (18 kB, Yi Liu)


            People

              Assignee: Yi Liu (hitliuyi)
              Reporter: Yi Liu (hitliuyi)
              Votes: 0
              Watchers: 13

              Dates

                Created:
                Updated:
                Resolved: