Hadoop Common / HADOOP-50

dfs datanode should store blocks in multiple directories


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.2.0
    • Fix Version/s: 0.6.0
    • Component/s: None
    • Labels: None

    Description

      The datanode currently stores all file blocks in a single directory. With 32MB blocks and terabyte filesystems, this will create too many files in a single directory for many filesystems. Thus blocks should be stored in multiple directories, perhaps even a shallow hierarchy.
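
      The layout change described above can be sketched by deriving a stable, shallow directory path from the block id itself, so no extra metadata is needed to find a block. This is only an illustrative sketch; the class and method names (`BlockDirs`, `subdirFor`) and the fan-out of 64 are assumptions, not the actual datanode code or the attached patch.

      ```java
      // Illustrative sketch (not the actual datanode code): spread block
      // files across a shallow two-level directory tree instead of one
      // flat directory, deriving the path deterministically from the id.
      public class BlockDirs {
          static final int FANOUT = 64; // directories per level (assumed)

          static String subdirFor(long blockId) {
              // Two shallow levels give 64 * 64 = 4096 leaf directories,
              // keeping per-directory file counts small even with
              // terabytes of 32MB blocks.
              int d1 = (int) Math.floorMod(blockId, (long) FANOUT);
              int d2 = (int) Math.floorMod(blockId / FANOUT, (long) FANOUT);
              return "subdir" + d1 + "/subdir" + d2;
          }

          public static void main(String[] args) {
              System.out.println(subdirFor(123456789L)); // prints subdir21/subdir52
          }
      }
      ```

      Because the path is a pure function of the block id, a datanode can locate any block without a lookup table, and the same scheme works unchanged as the filesystem grows.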

      Attachments

        1. hadoop.50.patch.1 (13 kB, Mike Cafarella)


            People

              Assignee: Milind Barve (milindb)
              Reporter: Doug Cutting (cutting)
              Votes: 1
              Watchers: 0

              Dates

                Created:
                Updated:
                Resolved: