Hadoop HDFS / HDFS-12107

FsDatasetImpl#removeVolumes floods the logs when removing the volume

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha4
    • Fix Version/s: 3.0.0-beta1
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      FsDatasetImpl#removeVolumes() logs every block ID on a volume while removing it, which floods the DataNode log.

      for (String bpid : volumeMap.getBlockPoolList()) {
        List<ReplicaInfo> blocks = new ArrayList<>();
        for (Iterator<ReplicaInfo> it =
            volumeMap.replicas(bpid).iterator(); it.hasNext();) {
          ReplicaInfo block = it.next();
          final StorageLocation blockStorageLocation =
              block.getVolume().getStorageLocation();
          LOG.info("checking for block " + block.getBlockId() +
              " with storageLocation " + blockStorageLocation);
          if (blockStorageLocation.equals(sdLocation)) {
            blocks.add(block);
            it.remove();
          }
        }
      }

      The logging level should be DEBUG or TRACE instead of INFO.
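      The effect of demoting the per-block message can be sketched with a level-guarded logger. This is an illustrative standalone example using `java.util.logging` (the actual Hadoop code uses its own `LOG` facade; `RemoveVolumesLogging` and `logBlocks` are hypothetical names): with the logger at its default INFO level, per-block DEBUG-equivalent messages are suppressed and the string concatenation is skipped entirely.

      ```java
      import java.util.logging.Level;
      import java.util.logging.Logger;

      public class RemoveVolumesLogging {
        private static final Logger LOG =
            Logger.getLogger(RemoveVolumesLogging.class.getName());

        // Guarded per-block logging: the message is only built and emitted
        // when the level is enabled, so with the default INFO level the
        // per-block lines never reach the log.
        static int logBlocks(long[] blockIds) {
          int logged = 0;
          for (long blockId : blockIds) {
            if (LOG.isLoggable(Level.FINE)) { // FINE ~ DEBUG
              LOG.fine("checking for block " + blockId);
              logged++;
            }
          }
          return logged;
        }

        public static void main(String[] args) {
          // Default effective level is INFO, so no per-block lines are logged.
          System.out.println(logBlocks(new long[]{1L, 2L, 3L})); // prints 0
        }
      }
      ```

      With thousands of replicas per volume, this guard is the difference between a single summary line and a flood of per-block entries during volume removal.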


            People

            • Assignee: Kelvin Chu (kelvinchu)
            • Reporter: Haohui Mai (wheat9)
            • Votes: 0
            • Watchers: 4
