Hadoop HDFS / HDFS-8988

Use LightWeightHashSet instead of LightWeightLinkedSet in BlockManager#excessReplicateMap


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      public final Map<String, LightWeightLinkedSet<Block>> excessReplicateMap = new HashMap<>();
      

      LightWeightLinkedSet extends LightWeightHashSet and additionally stores its elements in a doubly linked list to support ordered traversal. Each entry therefore requires more memory (2 extra references = 8 + 8 = 16 bytes, assuming a 64-bit system/JVM).
      I have reviewed the source code, and we don't need ordered traversal for excess replicated blocks, so we could use LightWeightHashSet to save memory.
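      The trade-off mirrors the one between the JDK's own HashSet and LinkedHashSet: the linked variant threads a doubly linked list through its entries, at the cost of two extra references per element, solely to guarantee insertion-order iteration. A minimal sketch using the standard collections (an illustrative analogy, since Hadoop's LightWeight* classes live inside the HDFS source tree, not on a default classpath):

      ```java
      import java.util.HashSet;
      import java.util.LinkedHashSet;
      import java.util.Set;

      public class SetOrderDemo {
          public static void main(String[] args) {
              // LinkedHashSet keeps a doubly linked list through its entries
              // (two extra references each) to guarantee insertion order.
              Set<Integer> linked = new LinkedHashSet<>();
              // HashSet stores the same elements without the linked list,
              // so iteration order is unspecified but per-entry memory is lower.
              Set<Integer> plain = new HashSet<>();
              for (int i : new int[] {30, 10, 20}) {
                  linked.add(i);
                  plain.add(i);
              }
              System.out.println("linked: " + linked);  // always [30, 10, 20]
              System.out.println("plain:  " + plain);   // order not guaranteed
              // Set.equals compares contents only, so both sets are equal.
              System.out.println(linked.equals(plain)); // true
          }
      }
      ```

      If callers never depend on iteration order, as is the case for excessReplicateMap, the cheaper unordered set is a drop-in replacement.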

      Attachments

        1. HDFS-8988.001.patch
          6 kB
          Yi Liu
        2. HDFS-8988.002.patch
          6 kB
          Yi Liu


          People

            Assignee: Yi Liu
            Reporter: Yi Liu
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved: