Hadoop HDFS / HDFS-12051

Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly those denoting file/directory names) to save memory


Details

    • Type: Improvement
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Incompatible change
    • Release Note: Addressed @atm's comments

    Description

      When a snapshot diff operation is performed on a NameNode that manages several million HDFS files/directories, the NN needs a great deal of memory. Analyzing one heap dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays impose a 6.5% memory overhead (3,220,272K in this dump), and that most of these arrays are referenced by org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name and org.apache.hadoop.hdfs.server.namenode.INodeFile.name:

      19. DUPLICATE PRIMITIVE ARRAYS
      
      Types of duplicate objects:
           Ovhd         Num objs  Num unique objs   Class name
      
      3,220,272K (6.5%)   104749528      25760871         byte[]
      ....
      
        1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
      3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 2228255 of byte[8](48, 48, 48, 48, 48, 48, 95, 48), 357439 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 237395 of byte[8](48, 48, 48, 48, 48, 49, 95, 48), 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 169487 of byte[8](48, 48, 48, 48, 48, 50, 95, 48), 145055 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...), 128134 of byte[8](48, 48, 48, 48, 48, 51, 95, 48), 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
      ... and 45902395 more arrays, of which 13158084 are unique
           <-- org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode <--  {j.u.ArrayList} <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[] <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.features <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- ... (1 elements) ... <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0 <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
      
        409,830K (0.8%), 13482787 dup arrays (13260241 unique)
      430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 353 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 352 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 350 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 342 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 340 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 337 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...), 334 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
      ... and 13479257 more arrays, of which 13260231 are unique
           <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[] <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0 <-- j.l.Thread[] <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0 <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
      ....
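
      As an aside, the duplicated arrays above are simply the UTF-8 bytes of file names. Decoding the truncated prefixes of the most frequent ones (a quick illustration, not part of any patch) shows they look like typical MapReduce-style output names:

      import java.nio.charset.StandardCharsets;

      public class DecodeDumpedNames {
        public static void main(String[] args) {
          // Prefixes of the most frequent duplicate arrays in the jxray dump above
          byte[] a = {112, 97, 114, 116, 45, 109, 45, 48, 48, 48};  // start of a byte[17]
          byte[] b = {48, 48, 48, 48, 48, 48, 95, 48};              // the full byte[8]
          byte[] c = {116, 97, 115, 107, 95, 49, 52, 57, 55, 48};   // start of a byte[32]

          System.out.println(new String(a, StandardCharsets.UTF_8)); // part-m-000
          System.out.println(new String(b, StandardCharsets.UTF_8)); // 000000_0
          System.out.println(new String(c, StandardCharsets.UTF_8)); // task_14970
        }
      }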
      

      There are several other places in the NameNode code that also produce duplicate byte[] arrays.

      To eliminate this duplication and reclaim the wasted memory, we will need to write a small class similar to StringInterner, but designed specifically for byte[] arrays.
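
      For illustration, below is a minimal sketch of what such a byte[] interner could look like. The class name ByteArrayInterner, the table size, and the lossy-overwrite policy are assumptions made for this sketch, not the design of the attached patches (see HDFS-12051-NameCache-Rewrite.pdf for the actual proposal). Note that, unlike String, a byte[] cannot be used directly as a HashMap key, because its hashCode() is identity-based; the sketch therefore hashes array contents explicitly with Arrays.hashCode:

      import java.util.Arrays;

      /**
       * Minimal sketch of a byte[] interner, analogous to StringInterner but
       * for byte arrays. Hypothetical; name, size and policy are illustrative.
       */
      public final class ByteArrayInterner {

        // Power-of-two table size (an assumption for the sketch); a bigger
        // table deduplicates more distinct names at a fixed memory cost.
        private static final int CACHE_SIZE = 1 << 16;

        private final byte[][] cache = new byte[CACHE_SIZE][];

        /**
         * Returns a canonical array equal to {@code name}. Callers should
         * store the returned reference and drop their own copy, so that the
         * duplicate array becomes garbage-collectable.
         */
        public synchronized byte[] intern(byte[] name) {
          if (name == null) {
            return null;
          }
          int slot = Arrays.hashCode(name) & (CACHE_SIZE - 1);
          byte[] cached = cache[slot];
          if (cached != null && Arrays.equals(cached, name)) {
            return cached;     // duplicate eliminated
          }
          cache[slot] = name;  // lossy: a collision just overwrites the slot
          return name;
        }
      }

      A fixed-size, lossy table keeps the interner's own footprint bounded without any eviction logic: a hash collision merely costs a missed deduplication opportunity, never correctness. Call sites would then store the interned reference wherever names are kept, e.g. this.name = interner.intern(name) at the points where INodeFile.name and the snapshot copy's name are assigned.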

      Attachments

        1. HDFS-12051.12.patch
          31 kB
          Misha Dmitriev
        2. HDFS-12051.11.patch
          28 kB
          Misha Dmitriev
        3. HDFS-12051.10.patch
          28 kB
          Misha Dmitriev
        4. HDFS-12051-NameCache-Rewrite.pdf
          152 kB
          Misha Dmitriev
        5. HDFS-12051.09.patch
          28 kB
          Misha Dmitriev
        6. HDFS-12051.08.patch
          27 kB
          Misha Dmitriev
        7. HDFS-12051.07.patch
          26 kB
          Misha Dmitriev
        8. HDFS-12051.06.patch
          26 kB
          Misha Dmitriev
        9. HDFS-12051.05.patch
          26 kB
          Misha Dmitriev
        10. HDFS-12051.04.patch
          26 kB
          Misha Dmitriev
        11. HDFS-12051.03.patch
          20 kB
          Misha Dmitriev
        12. HDFS-12051.02.patch
          18 kB
          Misha Dmitriev
        13. HDFS-12051.01.patch
          15 kB
          Misha Dmitriev

    People

      Assignee: Misha Dmitriev (misha@cloudera.com)
      Reporter: Misha Dmitriev (misha@cloudera.com)
      Votes: 0
      Watchers: 14
