Hadoop HDFS
HDFS-12107

FsDatasetImpl#removeVolumes floods the logs when removing the volume


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha4
    • Fix Version/s: 3.0.0-beta1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      FsDatasetImpl#removeVolumes() logs every block id on a volume when removing it, which floods the DataNode log.

      for (String bpid : volumeMap.getBlockPoolList()) {
        List<ReplicaInfo> blocks = new ArrayList<>();
        for (Iterator<ReplicaInfo> it =
            volumeMap.replicas(bpid).iterator(); it.hasNext();) {
          ReplicaInfo block = it.next();
          final StorageLocation blockStorageLocation =
              block.getVolume().getStorageLocation();
          LOG.info("checking for block " + block.getBlockId() +
              " with storageLocation " + blockStorageLocation);
          if (blockStorageLocation.equals(sdLocation)) {
            blocks.add(block);
            it.remove();
          }
        }
        // ... rest of the per-block-pool handling elided
      }

      The logging level should be DEBUG or TRACE instead of INFO.
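      The pattern behind the fix can be sketched as follows. This is a minimal, self-contained illustration (not the actual HDFS code): the names Replica, drainReplicasOn, and the use of java.util.logging in place of the project's logger are assumptions for the sketch. Per-block logging happens at a debug-level (FINE), guarded so the string concatenation is skipped when that level is disabled.

      ```java
      import java.util.ArrayList;
      import java.util.Iterator;
      import java.util.List;
      import java.util.logging.Level;
      import java.util.logging.Logger;

      public class RemoveVolumeSketch {
        private static final Logger LOG =
            Logger.getLogger(RemoveVolumeSketch.class.getName());

        // Hypothetical stand-in for ReplicaInfo: a block id plus its storage location.
        record Replica(long blockId, String storageLocation) {}

        /**
         * Collects (and removes from the replica list) the replicas stored on the
         * volume being removed. Per-block logging is at FINE (debug) level so that
         * removing a volume holding many blocks does not flood the log.
         */
        static List<Replica> drainReplicasOn(List<Replica> replicas, String sdLocation) {
          List<Replica> blocks = new ArrayList<>();
          for (Iterator<Replica> it = replicas.iterator(); it.hasNext();) {
            Replica block = it.next();
            if (LOG.isLoggable(Level.FINE)) { // guard skips concatenation when disabled
              LOG.fine("checking for block " + block.blockId()
                  + " with storageLocation " + block.storageLocation());
            }
            if (block.storageLocation().equals(sdLocation)) {
              blocks.add(block);
              it.remove();
            }
          }
          return blocks;
        }

        public static void main(String[] args) {
          List<Replica> map = new ArrayList<>(List.of(
              new Replica(1L, "/data1"),
              new Replica(2L, "/data2"),
              new Replica(3L, "/data1")));
          List<Replica> removed = drainReplicasOn(map, "/data1");
          System.out.println(removed.size() + " removed, " + map.size() + " remain");
        }
      }
      ```

      With a debug or trace level, an operator can still re-enable the per-block messages when diagnosing a volume removal, while the default INFO output stays quiet.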

      Attachments

        1. HDFS-12107.001.patch
          1 kB
          Kelvin Chu

        Activity

          People

            Assignee: kelvinchu Kelvin Chu
            Reporter: wheat9 Haohui Mai
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved: