Hadoop HDFS / HDFS-6088

Add configurable maximum block count for datanode


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix

Description

    Currently, datanode resources are protected by the free-space check and the balancer, but a datanode can run out of memory simply by storing too many blocks. If the blocks are small, the datanode will appear to have plenty of space to accept more of them.

    I propose adding a configurable maximum block count to the datanode. Since datanodes can have different heap configurations, it makes sense to enforce this at the datanode level rather than from the namenode.

Attachments

Activity

People

    Assignee: Kihwal Lee (kihwal)
    Reporter: Kihwal Lee (kihwal)
    Votes: 0
    Watchers: 5

Dates

    Created:
    Updated:
    Resolved: