Hadoop Common / HADOOP-296

Do not assign blocks to a datanode with < x mb free


Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.3.2
    • Fix Version/s: 0.4.0
    • Component/s: None
    • Labels: None

    Description

      We're running a smallish cluster of very different machines, some with only 60 GB hard drives.
      This creates a problem when inserting files into the DFS: these machines run out of space quickly and then cannot run any map/reduce operations.

      A solution would be to stop assigning new blocks to a datanode once its free space drops below a certain user-configurable threshold.
      The remaining free space could then be used by map/reduce operations instead (if they run on the same disk).
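      A minimal sketch of the proposed check, assuming a hypothetical threshold value read from configuration; the attached patches define the actual property name and how the check is wired into the namenode's block assignment:

          // Illustrative sketch only; the class and the way the threshold is
          // supplied are assumptions, not code from the attached patches.
          public class MinFreeSpaceCheck {

              /** User-configurable free-space threshold, converted to bytes. */
              private final long minFreeBytes;

              public MinFreeSpaceCheck(long minFreeMb) {
                  this.minFreeBytes = minFreeMb * 1024L * 1024L;
              }

              /**
               * A datanode is eligible for a new block only if writing the block
               * still leaves at least the configured amount of free space.
               */
              public boolean canAcceptBlock(long remainingBytes, long blockSizeBytes) {
                  return remainingBytes - blockSizeBytes >= minFreeBytes;
              }

              public static void main(String[] args) {
                  MinFreeSpaceCheck check = new MinFreeSpaceCheck(1024); // reserve 1 GB
                  long remaining = 5L * 1024 * 1024 * 1024;              // 5 GB free on the datanode
                  long blockSize = 64L * 1024 * 1024;                    // 64 MB block
                  System.out.println(check.canAcceptBlock(remaining, blockSize)); // true
              }
          }

      Datanodes that fail the check would simply be skipped when choosing targets for new blocks, leaving their remaining space available to map/reduce tasks.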

      Attachments

        1. minspacev5.patch
          2 kB
          Johan Oskarsson
        2. minspacev4.patch
          2 kB
          Johan Oskarsson
        3. minspacev3.patch
          3 kB
          Johan Oskarsson
        4. minspacev3.patch
          3 kB
          Johan Oskarsson
        5. minspacev2.patch
          1 kB
          Johan Oskarsson
        6. minspace.patch
          2 kB
          Johan Oskarsson


          People

            Assignee: Johan Oskarsson (johanoskarsson)
            Reporter: Johan Oskarsson (johanoskarsson)
            Votes: 0
            Watchers: 0

            Dates

              Created:
              Updated:
              Resolved: