Hadoop HDFS / HDFS-12107

FsDatasetImpl#removeVolumes floods the logs when removing the volume


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha4
    • Fix Version/s: 3.0.0-beta1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

FsDatasetImpl#removeVolumes() prints a log line for every block id on a volume when removing it, which floods the DataNode log.

      for (String bpid : volumeMap.getBlockPoolList()) {
        List<ReplicaInfo> blocks = new ArrayList<>();
        for (Iterator<ReplicaInfo> it =
            volumeMap.replicas(bpid).iterator(); it.hasNext();) {
          ReplicaInfo block = it.next();
          final StorageLocation blockStorageLocation =
              block.getVolume().getStorageLocation();
          LOG.info("checking for block " + block.getBlockId() +
              " with storageLocation " + blockStorageLocation);
          if (blockStorageLocation.equals(sdLocation)) {
            blocks.add(block);
            it.remove();
          }
        }
      }

      The logging level should be DEBUG or TRACE instead of INFO.
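To illustrate why moving the per-block message to DEBUG stops the flood, here is a minimal sketch using java.util.logging in place of Hadoop's actual logging framework (the logger name and messages are hypothetical, and FINE stands in for DEBUG): at a typical production level of INFO, the per-block messages are suppressed and only a single summary line is emitted.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogLevelDemo {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("FsDatasetImplDemo");
        log.setUseParentHandlers(false);

        // Capture every record the logger actually publishes.
        final List<LogRecord> records = new ArrayList<>();
        log.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { records.add(r); }
            @Override public void flush() {}
            @Override public void close() {}
        });

        // Typical production level for a DataNode: INFO.
        log.setLevel(Level.INFO);

        // Per-block message at FINE (the j.u.l. analogue of DEBUG): suppressed.
        for (long blockId = 1; blockId <= 1000; blockId++) {
            log.fine("checking for block " + blockId);
        }

        // One INFO summary line instead of thousands of per-block lines.
        log.info("checked 1000 replicas on volume");

        System.out.println("published=" + records.size());
    }
}
```

With the per-block message at DEBUG/TRACE, an operator who needs the detail can still re-enable it for the class without flooding every volume removal by default.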

      Attachments

        Activity


          People

            Assignee: Kelvin Chu (kelvinchu)
            Reporter: Haohui Mai (wheat9)
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:
