Hadoop Common
HADOOP-50

dfs datanode should store blocks in multiple directories

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.2.0
    • Fix Version/s: 0.6.0
    • Component/s: None
    • Labels: None

    Description

      The datanode currently stores all file blocks in a single directory. With 32MB blocks and terabyte filesystems, this will create too many files in a single directory for many filesystems. Thus blocks should be stored in multiple directories, perhaps even a shallow hierarchy.
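A common way to implement this is to hash each block ID into one of a fixed set of subdirectories so the files spread evenly. The sketch below illustrates the idea only; the subdirectory count (64) and the `subdirNN` naming scheme are assumptions for illustration, not necessarily what the attached patch uses.

```java
// Sketch: spread block files across a fixed set of subdirectories
// instead of one flat directory. NUM_SUBDIRS and the "subdirNN"
// names are illustrative assumptions, not the patch's actual layout.
public class BlockPlacement {
    static final int NUM_SUBDIRS = 64;

    // Map a block ID to a subdirectory by taking it modulo the
    // directory count; floorMod keeps negative IDs in range.
    static String subdirFor(long blockId) {
        long bucket = Math.floorMod(blockId, (long) NUM_SUBDIRS);
        return String.format("subdir%02d", bucket);
    }

    public static void main(String[] args) {
        System.out.println(subdirFor(123456789L)); // prints "subdir21"
    }
}
```

With 64 buckets, a terabyte filesystem of 32MB blocks (~32,000 files) drops to roughly 500 files per directory, comfortably under typical per-directory limits.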

      Attachments

      • hadoop.50.patch.1 (13 kB, Mike Cafarella)


      Activity

      No work has yet been logged on this issue.

      People

      • Assignee: Milind Bhandarkar
      • Reporter: Doug Cutting
      • Votes: 1
      • Watchers: 0

