Hadoop HDFS · HDFS-138

data node process should not die if one dir goes bad


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      When multiple directories are configured for the data node process to store blocks, it currently exits when any one of them is not writable. Instead, it should either ignore that directory entirely or attempt to continue reading from it and mark it unusable if reads fail.
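      For illustration only, here is a minimal sketch of the requested behavior. This is not the DataNode's actual code; the class and method names below are invented. The idea: probe each configured directory, drop any that fail, keep serving from the healthy ones, and exit only when no usable directory remains.

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch: tolerate individual bad storage directories
 * instead of exiting the whole process. All names here are invented.
 */
public class VolumeSetSketch {

    private final List<File> healthyDirs = new ArrayList<>();

    public VolumeSetSketch(List<File> configuredDirs) {
        for (File dir : configuredDirs) {
            if (isUsable(dir)) {
                healthyDirs.add(dir);
            } else {
                System.err.println("Ignoring unusable storage dir: " + dir);
            }
        }
        if (healthyDirs.isEmpty()) {
            // Only give up when *every* configured directory is bad.
            throw new IllegalStateException("No usable storage directories");
        }
    }

    /** Probe a directory by creating and deleting a scratch file. */
    private static boolean isUsable(File dir) {
        try {
            File probe = File.createTempFile("probe", ".tmp", dir);
            return probe.delete();
        } catch (IOException e) {
            return false;
        }
    }

    /** Drop a directory at runtime, e.g. after a failed read. */
    public void markFailed(File dir) {
        healthyDirs.remove(dir);
        if (healthyDirs.isEmpty()) {
            throw new IllegalStateException("All storage directories failed");
        }
    }
}
{code}

      Under this scheme a single bad disk costs the cluster one volume's worth of replicas rather than the entire node's, and the remaining directories keep serving blocks.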


People

    Assignee: Unassigned
    Reporter: Allen Wittenauer
    Votes: 1
    Watchers: 4

Dates

    Created:
    Updated:
    Resolved: