Details
Type: Improvement
Status: Patch Available
Priority: Major
Resolution: Unresolved
Hadoop Flags: Incompatible change
Description
When a snapshot diff operation is performed on a NameNode that manages several million HDFS files/directories, the NameNode needs a lot of memory. Analyzing one heap dump with jxray (www.jxray.com), we observed that duplicate byte[] arrays result in a 6.5% memory overhead, and that most of these arrays are referenced by org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name and org.apache.hadoop.hdfs.server.namenode.INodeFile.name:
19. DUPLICATE PRIMITIVE ARRAYS

Types of duplicate objects:
      Ovhd               Num objs    Num unique objs   Class name
  3,220,272K (6.5%)      104749528   25760871          byte[]
....
  1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
      3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
      2228255 of byte[8](48, 48, 48, 48, 48, 48, 95, 48),
      357439 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
      237395 of byte[8](48, 48, 48, 48, 48, 49, 95, 48),
      227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
      179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
      169487 of byte[8](48, 48, 48, 48, 48, 50, 95, 48),
      145055 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
      128134 of byte[8](48, 48, 48, 48, 48, 51, 95, 48),
      108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
      ... and 45902395 more arrays, of which 13158084 are unique
    <-- org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name
    <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode
    <-- {j.u.ArrayList}
    <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs
    <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs
    <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[]
    <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.features
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
    <-- ... (1 elements) ...
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
    <-- j.l.Thread[]
    <-- j.l.ThreadGroup.threads
    <-- j.l.Thread.group
    <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER

  409,830K (0.8%), 13482787 dup arrays (13260241 unique)
      430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      353 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      352 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      350 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      342 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      340 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      337 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
      334 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
      ... and 13479257 more arrays, of which 13260231 are unique
    <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
    <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[]
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
    <-- j.l.Thread[]
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
    <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
    <-- j.l.Thread[]
    <-- j.l.ThreadGroup.threads
    <-- j.l.Thread.group
    <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
....
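Side note (not part of the jxray report): the dumped byte values above are plain ASCII file-name characters. Decoding the leading bytes of the most frequent duplicates shows names that look like MapReduce output files and task IDs, which explains why so many identical arrays exist. The class and variable names in the snippet below are only illustrative:

  import java.nio.charset.StandardCharsets;

  /** Decodes the leading bytes of the duplicated arrays listed above. */
  public class DecodeDuplicateNames {
    public static void main(String[] args) {
      // byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...) in the report
      byte[] name1 = {112, 97, 114, 116, 45, 109, 45, 48, 48, 48};
      // byte[8](48, 48, 48, 48, 48, 48, 95, 48) in the report
      byte[] name2 = {48, 48, 48, 48, 48, 48, 95, 48};
      // byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...) in the report
      byte[] name3 = {116, 97, 115, 107, 95, 49, 52, 57, 55, 48};

      System.out.println(new String(name1, StandardCharsets.US_ASCII)); // part-m-000
      System.out.println(new String(name2, StandardCharsets.US_ASCII)); // 000000_0
      System.out.println(new String(name3, StandardCharsets.US_ASCII)); // task_14970
    }
  }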
There are several other places in the NameNode code that also produce duplicate byte[] arrays.
To eliminate this duplication and reclaim memory, we will need to write a small class similar to StringInterner, but designed specifically for byte[] arrays.
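A minimal sketch of such an interner is below. It is only an illustration of the idea, not the actual patch: the class name, the content-hashing wrapper, and the use of a ConcurrentHashMap are assumptions.

  import java.util.Arrays;
  import java.util.concurrent.ConcurrentHashMap;

  /**
   * Sketch of a byte[] interner, analogous in spirit to
   * org.apache.hadoop.util.StringInterner but keyed on array contents.
   * Names and details are illustrative, not the actual patch.
   */
  public final class ByteArrayInterner {

    /** Wraps a byte[] to give it content-based equals()/hashCode(). */
    private static final class Key {
      private final byte[] bytes;
      private final int hash;

      Key(byte[] bytes) {
        this.bytes = bytes;
        this.hash = Arrays.hashCode(bytes);
      }

      @Override
      public int hashCode() {
        return hash;
      }

      @Override
      public boolean equals(Object o) {
        return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
      }
    }

    private final ConcurrentHashMap<Key, byte[]> pool = new ConcurrentHashMap<>();

    /**
     * Returns the canonical array with the same contents as the argument.
     * The first array seen with given contents becomes the canonical copy;
     * callers must treat interned arrays as immutable.
     */
    public byte[] intern(byte[] name) {
      if (name == null) {
        return null;
      }
      byte[] existing = pool.putIfAbsent(new Key(name), name);
      return existing != null ? existing : name;
    }
  }

Call sites that populate INodeFile.name or INodeFileAttributes$SnapshotCopy.name would pass the name through intern() before storing it, so identical file names share a single array. A production version should probably hold the pool via weak references or bound its size, so that names of deleted files can still be garbage-collected.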