HADOOP-50 (Hadoop Common)

dfs datanode should store blocks in multiple directories

Details

• Type: Bug
• Status: Closed
• Priority: Major
• Resolution: Fixed
• Affects Version/s: 0.2.0
• Fix Version/s: 0.6.0
• Component/s: None
• Labels: None

Description

The datanode currently stores all file blocks in a single directory. With 32 MB blocks and terabyte filesystems, this will create too many files in a single directory for many filesystems. Thus blocks should be stored in multiple directories, perhaps even a shallow hierarchy.
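One way such a shallow hierarchy could look is sketched below. This is an illustrative assumption, not the patch attached to this issue: the class name, the `FANOUT` constant, and the `subdirNN/blk_<id>` naming are all hypothetical. The idea is simply to hash the block id into a fixed two-level tree so that no single directory accumulates an unbounded number of block files.

```java
import java.io.File;

/**
 * Sketch (hypothetical, not Hadoop's actual layout): spread block files
 * across a two-level directory tree derived from the block id, bounding
 * the number of entries per directory.
 */
public class BlockDirLayout {
    // Hypothetical fan-out: 64 subdirectories per level.
    static final int FANOUT = 64;

    /** Map a block id to a relative path like "subdir52/subdir21/blk_<id>". */
    static String pathForBlock(long blockId) {
        // Use two disjoint 6-bit slices of the id as directory indices.
        int d1 = (int) ((blockId >>> 6) & (FANOUT - 1));
        int d2 = (int) (blockId & (FANOUT - 1));
        return "subdir" + d1 + File.separator + "subdir" + d2
                + File.separator + "blk_" + blockId;
    }

    public static void main(String[] args) {
        // Example: block 123456789 lands in subdir52/subdir21.
        System.out.println(pathForBlock(123456789L));
    }
}
```

With a fan-out of 64 at each of two levels, a datanode holding a million blocks averages fewer than 250 files per leaf directory, comfortably within what most filesystems handle well.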

Attachments

1. hadoop.50.patch.1 (13 kB, Mike Cafarella)


People

• Assignee: Milind Bhandarkar (milindb)
• Reporter: Doug Cutting (cutting)
• Votes: 1
• Watchers: 0

Dates

• Created:
• Updated:
• Resolved: