Hadoop HDFS / HDFS-15621

Datanode DirectoryScanner uses excessive memory



    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.4.0
    • Fix Version/s: None
    • Component/s: datanode


      We generally work to a rule of thumb of 1GB of heap on a datanode per 1M blocks. For nodes with a lot of blocks, this can mean a lot of heap.

      We recently captured a heap dump of a DN with about 22M blocks and found only about 1.5GB was occupied by the ReplicaMap. Another 9GB of the heap was taken by the DirectoryScanner ScanInfo objects, and most of this memory was allocated to strings.

      Checking the strings in question, we can see two strings per ScanInfo, looking like:


      I will upload a screenshot from MAT showing this.

      For the first string especially, the part "/current/BP-671271071-" will be the same for every block in the block pool, as the scanner is only concerned with finalized blocks.
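One way to exploit this (a sketch only, not the actual HDFS fix; the class and field names here are illustrative) is to keep a single shared String for the common block-pool prefix and have every entry reference it, rather than each entry carrying its own copy of the prefix characters:

```java
// Illustrative sketch: share one prefix String per block pool instead of
// duplicating it inside every per-block path string. Only the per-block
// suffix is stored per entry; the full path is rebuilt on demand.
class SharedPrefixPath {
    private final String sharedPrefix; // one object referenced by all blocks in the pool
    private final String suffix;       // per-block remainder, e.g. "subdir28/subdir17/blk_..."

    SharedPrefixPath(String sharedPrefix, String suffix) {
        this.sharedPrefix = sharedPrefix;
        this.suffix = suffix;
    }

    // Rebuild the full path only when it is actually needed.
    String getFullPath() {
        return sharedPrefix + "/" + suffix;
    }
}
```

With N blocks in a pool, this stores the prefix characters once instead of N times, at the cost of one String concatenation per lookup.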

      We can probably also store just the subdir indexes "28" and "17" rather than "subdir28/subdir17" and then construct the path when it is requested via the getter.
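The subdir idea can be sketched as follows (again an illustration, not the real ScanInfo class; the field names and the use of the block id are assumptions): hold the two subdir indexes as small ints and rebuild the "subdirX/subdirY" portion of the path in the getter.

```java
// Illustrative sketch (not the actual org.apache.hadoop.hdfs ScanInfo):
// instead of holding a per-replica String like "subdir28/subdir17/blk_...",
// hold two ints plus the block id and reconstruct the path on demand.
class CompactScanInfo {
    private final int subdir1;   // e.g. 28 for "subdir28"
    private final int subdir2;   // e.g. 17 for "subdir17"
    private final long blockId;

    CompactScanInfo(int subdir1, int subdir2, long blockId) {
        this.subdir1 = subdir1;
        this.subdir2 = subdir2;
        this.blockId = blockId;
    }

    // Reconstruct the path suffix only when requested, trading a little
    // CPU per call for a large per-object memory saving.
    String getBlockSuffix() {
        return "subdir" + subdir1 + "/subdir" + subdir2 + "/blk_" + blockId;
    }
}
```

Two ints and a long cost a fixed ~16 bytes per entry, versus a String object, its backing char array, and the duplicated "subdir" characters.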


          Attachments

        1. Screenshot 2020-10-09 at 15.20.56.png
          110 kB
          Stephen O'Donnell
        2. Screenshot 2020-10-09 at 14.11.36.png
          259 kB
          Stephen O'Donnell




              • Assignee:
                sodonnell Stephen O'Donnell
              • Votes: 0
              • Watchers: 7
