Hadoop HDFS / HDFS-4630

Datanode is going OOM due to small files in HDFS

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Invalid
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: None
    • Component/s: datanode, namenode
    • Labels: None
    • Environment: Ubuntu, Java 1.6

    Description

    Hi,

    We have very small files (sizes ranging from 10 KB to 1 MB) in our HDFS, and the number of files is in the tens of millions. Because of this, both the namenode and the datanode go out of memory very frequently. When we analysed a heap dump of the datanode, most of the memory was used by ReplicaMap.

    Can we use EhCache or a similar store so that we do not keep all of this data in memory?

    Thanks,
    Ankush
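The scale described above can be sketched with a quick estimate. This is a minimal sketch, assuming the commonly cited rule of thumb of roughly 150 bytes of JVM heap per metadata object (file or block) and single-block files; the file count and per-object size here are illustrative assumptions, not figures from this issue.

```java
// Back-of-envelope heap estimate for per-file/per-block metadata held in
// memory. All numbers below are assumptions for illustration.
public class SmallFileHeapEstimate {
    public static void main(String[] args) {
        long files = 50_000_000L;   // "tens of millions" of small files (assumed)
        long blocksPerFile = 1;     // 10 KB-1 MB files fit in a single block
        long bytesPerObject = 150;  // rule-of-thumb heap cost per metadata object

        // Each file contributes roughly one file object plus its block objects.
        long heapBytes = files * (1 + blocksPerFile) * bytesPerObject;

        // Roughly 14 GB of heap just for metadata at this scale.
        System.out.printf("approx heap for metadata: %.1f GB%n",
                heapBytes / (1024.0 * 1024 * 1024));
    }
}
```

The estimate shows why small files hurt: heap usage scales with the number of objects, not the number of bytes stored, so tens of millions of tiny files cost as much metadata memory as tens of millions of large ones.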

    Activity

    Harsh J made changes:
      Status: Reopened [4] → Resolved [5]
      Resolution: Invalid [6]
    Ankush Bhatiya made changes:
      Status: Resolved [5] → Reopened [4]
      Resolution: Invalid [6] (cleared)
    Suresh Srinivas made changes:
      Status: Open [1] → Resolved [5]
      Resolution: Invalid [6]
    Ankush Bhatiya created issue

    People

    • Assignee: Unassigned
    • Reporter: Ankush Bhatiya
    • Votes: 0
    • Watchers: 4
