Hadoop HDFS / HDFS-13246

FileInputStream redundant closes in readReplicasFromCache


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Fix Version/s: 3.2.0, 3.1.0
    • Component/s: datanode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      When I read readReplicasFromCache() in the BlockPoolSlice class of the datanode, I found that the following code closes the FileInputStream redundantly. The IOUtils.closeStream(inputStream) call in the finally block already guarantees that the stream is closed correctly, so the explicit inputStream.close() can be removed. Thanks.

       

      BlockPoolSlice.java
    FileInputStream inputStream = null;
          try {
            inputStream = fileIoProvider.getFileInputStream(volume, replicaFile);
            BlockListAsLongs blocksList =
                BlockListAsLongs.readFrom(inputStream, maxDataLength);
            if (blocksList == null) {
              return false;
            }
      
            for (BlockReportReplica replica : blocksList) {
              switch (replica.getState()) {
              case FINALIZED:
                addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, true);
                break;
              case RUR:
              case RBW:
              case RWR:
                addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, false);
                break;
              default:
                break;
              }
            }
            inputStream.close();
      
            // Now it is safe to add the replica into volumeMap
            // In case of any exception during parsing this cache file, fall back
            // to scan all the files on disk.
            for (Iterator<ReplicaInfo> iter =
                tmpReplicaMap.replicas(bpid).iterator(); iter.hasNext(); ) {
              ReplicaInfo info = iter.next();
              // We use a lightweight GSet to store replicaInfo, we need to remove
              // it from one GSet before adding to another.
              iter.remove();
              volumeMap.add(bpid, info);
            }
            LOG.info("Successfully read replica from cache file : "
                + replicaFile.getPath());
            return true;
          } catch (Exception e) {
            // Any exception we need to revert back to read from disk
            // Log the error and return false
            LOG.info("Exception occurred while reading the replicas cache file: "
                + replicaFile.getPath(), e );
            return false;
          }
          finally {
            if (!fileIoProvider.delete(volume, replicaFile)) {
              LOG.info("Failed to delete replica cache file: " +
                  replicaFile.getPath());
            }
      
            // close the inputStream
            IOUtils.closeStream(inputStream);
          }
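
      For illustration, here is a minimal, self-contained sketch of the same close-in-finally pattern after dropping the explicit close. The class and the countBytes() helper below are hypothetical examples, not part of BlockPoolSlice; only IOUtils.closeStream() is the actual Hadoop utility.

      CloseInFinallyExample.java
      import java.io.File;
      import java.io.FileInputStream;
      import java.io.IOException;

      import org.apache.hadoop.io.IOUtils;

      public class CloseInFinallyExample {
        // Hypothetical helper: counts the bytes in a file and relies solely on
        // the finally block to close the stream, mirroring the proposed change.
        static long countBytes(File file) {
          FileInputStream in = null;
          try {
            in = new FileInputStream(file);
            long total = 0;
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
              total += n;
            }
            // No explicit in.close() here; the finally block below closes the
            // stream on both the success path and the failure path.
            return total;
          } catch (IOException e) {
            return -1;
          } finally {
            // IOUtils.closeStream() is null-safe and ignores exceptions thrown
            // while closing, so a second close() inside the try block is redundant.
            IOUtils.closeStream(in);
          }
        }

        public static void main(String[] args) {
          System.out.println(countBytes(new File(args[0])));
        }
      }

      A try-with-resources statement would close the stream automatically as well; the sketch keeps the explicit finally block to stay close to the structure of the quoted BlockPoolSlice code, which also performs other cleanup (deleting the replica cache file) there.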
      
      

      Attachments

        1. HDFS-13246.001.patch
          1 kB
          liaoyuxiangqin


          People

            Assignee: liaoyuxiangqin
            Reporter: liaoyuxiangqin
            Votes: 0
            Watchers: 4
