Hadoop HDFS / HDFS-11229

HDFS-11056 failed to close meta file


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 2.7.4, 3.0.0-alpha2
    • Fix Version/s: 2.8.0, 2.7.4, 3.0.0-alpha2
    • Component/s: datanode
    • Labels: None
    • Release Note:
      The fix for HDFS-11056 reads the meta file to load the last partial chunk checksum when a block is converted from finalized/temporary to rbw. However, it did not close the file explicitly, which may cause the number of open files to reach the system limit. This jira fixes it by closing the file explicitly after the meta file is read.

    Description

      The following code fails to close the file after it has been read.

      FsVolumeImpl#loadLastPartialChunkChecksum
          RandomAccessFile raf = new RandomAccessFile(metaFile, "r");
          raf.seek(offsetInChecksum);
          raf.read(lastChecksum, 0, checksumSize);
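          // raf is never closed explicitly; the descriptor stays open until GC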
          return lastChecksum;
      

      This must be fixed because every append operation exercises this code path. Without an explicit close, the number of open file descriptors can reach the system limit before the RandomAccessFile objects are garbage collected.
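
      A minimal sketch of one possible fix, assuming a try-with-resources block (the committed patch may be structured differently). The MetaChecksumReader class and the standalone method signature below are hypothetical, introduced only so the example compiles on its own; readFully is used in place of read so the whole checksum is read before returning.

          import java.io.File;
          import java.io.IOException;
          import java.io.RandomAccessFile;

          // Hypothetical standalone helper; mirrors the snippet above but is not
          // the FsVolumeImpl code from the patch.
          class MetaChecksumReader {
            static byte[] loadLastPartialChunkChecksum(
                File metaFile, long offsetInChecksum, int checksumSize)
                throws IOException {
              byte[] lastChecksum = new byte[checksumSize];
              // try-with-resources closes raf even if seek() or readFully() throws,
              // so the underlying file descriptor is always released.
              try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
                raf.seek(offsetInChecksum);
                raf.readFully(lastChecksum, 0, checksumSize);
              }
              return lastChecksum;
            }
          }

      On branches that avoid try-with-resources, closing the file in a finally block (for example with org.apache.hadoop.io.IOUtils.closeStream) achieves the same effect.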

      Attachments

        1. HDFS-11229.001.patch
          1 kB
          Wei-Chiu Chuang
        2. HDFS-11229.branch-2.patch
          1 kB
          Wei-Chiu Chuang


            People

              Assignee: Wei-Chiu Chuang (weichiu)
              Reporter: Wei-Chiu Chuang (weichiu)
              Votes: 0
              Watchers: 6

              Dates

                Created:
                Updated:
                Resolved: