Hadoop Common / HADOOP-74

hash blocks into dfs.data.dirs


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.2.0
    • Fix Version/s: 0.2.0
    • Component/s: None
    • Labels: None
    • Environment: large clusters

    Description

      When dfs.data.dir has multiple values, we currently start a DataNode for each of them (all in the same JVM). Instead, we should run a single DataNode that stores block files across the different directories; this would reduce the number of connections to the namenode. We cannot simply hash block ids to directories, because the devices may be full to different degrees. The datanode will therefore need to keep a table mapping each block id to its file location, and place new blocks on the less-full devices.
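      The placement scheme described above can be sketched as follows. This is a minimal illustration, not the actual Hadoop implementation: the class and method names (`BlockPlacer`, `addBlock`, `locateBlock`) are hypothetical, and free space is read via `File.getUsableSpace()` as a stand-in for whatever device accounting a real datanode would do.

```java
import java.io.File;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a single DataNode managing multiple dfs.data.dir volumes:
// one table maps block id -> file location, and new blocks go to the
// volume with the most free space instead of being hashed.
public class BlockPlacer {
    private final List<File> volumes;                        // the dfs.data.dir entries
    private final Map<Long, File> blockMap = new HashMap<>(); // block id -> block file

    public BlockPlacer(List<File> volumes) {
        this.volumes = volumes;
    }

    // Pick the least-full device; hashing would ignore the fact that
    // devices may have different amounts of free space.
    private File leastFullVolume() {
        return Collections.max(volumes,
                Comparator.comparingLong(File::getUsableSpace));
    }

    // Record a new block's location and return the file it should be written to.
    public File addBlock(long blockId) {
        File f = new File(leastFullVolume(), "blk_" + blockId);
        blockMap.put(blockId, f);
        return f;
    }

    // Look up where an existing block is stored.
    public File locateBlock(long blockId) {
        return blockMap.get(blockId);
    }
}
```

      With a table like this, reads no longer depend on which directory a block hashed to, so volumes of different sizes can fill at different rates without breaking lookups.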

      Attachments

      Issue Links

      Activity

            People

              Assignee: shv Konstantin Shvachko
              Reporter: cutting Doug Cutting
              Votes: 0
              Watchers: 0

              Dates

                Created:
                Updated:
                Resolved: