  1. Hadoop HDFS
  2. HDFS-8988

Use LightWeightHashSet instead of LightWeightLinkedSet in BlockManager#excessReplicateMap

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      public final Map<String, LightWeightLinkedSet<Block>> excessReplicateMap = new HashMap<>();
      

      LightWeightLinkedSet extends LightWeightHashSet and additionally stores its elements in a doubly linked list to support ordered traversal, so it needs more memory for each entry: two extra references, 8 + 8 = 16 bytes, assuming a 64-bit system/JVM.
      I have traversed the source code, and we don't need ordered traversal for excess replicated blocks, so we could use LightWeightHashSet to save memory.
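      The per-entry cost can be sketched as follows. This is a hypothetical simplification, not the actual Hadoop classes (the real ones live in org.apache.hadoop.hdfs.util, and their field names differ); it only illustrates why the linked variant pays two extra references per entry.

      ```java
      public class EntryOverheadSketch {

          // A plain hash-set entry: the element plus a collision-chain link.
          static class HashSetEntry<T> {
              T element;
              HashSetEntry<T> next; // collision chain within a bucket
          }

          // A linked-set entry additionally keeps doubly-linked-list pointers
          // so the set can be traversed in insertion order.
          static class LinkedSetEntry<T> extends HashSetEntry<T> {
              LinkedSetEntry<T> before; // previous element in insertion order
              LinkedSetEntry<T> after;  // next element in insertion order
          }

          // 64-bit JVM without compressed oops: one reference = 8 bytes.
          static final long REFERENCE_BYTES = 8;

          // Extra bytes the linked variant pays per entry: two more references.
          static long extraBytes(long entries) {
              return entries * 2 * REFERENCE_BYTES;
          }

          public static void main(String[] args) {
              // Tracking one million excess-replica blocks, the ordering links
              // alone would cost roughly 16 MB.
              System.out.println(extraBytes(1_000_000)); // prints 16000000
          }
      }
      ```

      Since excessReplicateMap is only ever iterated without caring about order, dropping the before/after links loses nothing while shrinking each entry.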

        Attachments

        1. HDFS-8988.002.patch
          6 kB
          Yi Liu
        2. HDFS-8988.001.patch
          6 kB
          Yi Liu

          Activity

            People

            • Assignee:
              hitliuyi Yi Liu
            • Reporter:
              hitliuyi Yi Liu
            • Votes:
              0
            • Watchers:
              4

              Dates

              • Created:
                Updated:
                Resolved: