Hadoop HDFS / HDFS-138

data node process should not die if one dir goes bad


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

    When multiple directories are configured for the data node process to store blocks, it currently exits when one of them is not writable. Instead, it should either ignore that directory entirely, or continue reading from it and mark it unusable if reads fail.
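
    As a rough sketch of the requested behavior (the class and names here, e.g. VolumeHealthCheck and usableDirs, are hypothetical and not actual DataNode code), the startup check could probe each configured directory for writability and take the bad ones out of service, aborting only when no usable directory remains:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch only: probe each configured storage directory
    // and drop the unusable ones instead of exiting, failing only when
    // no usable directory remains.
    public class VolumeHealthCheck {

        // Returns the subset of configured dirs that pass a write probe.
        static List<Path> usableDirs(List<Path> configured) {
            List<Path> usable = new ArrayList<>();
            for (Path dir : configured) {
                try {
                    // Probe writability: create and delete a temp file.
                    Path probe = Files.createTempFile(dir, ".probe", null);
                    Files.delete(probe);
                    usable.add(dir);
                } catch (IOException e) {
                    // Log and skip the bad dir rather than killing the process.
                    System.err.println("Removing bad data dir " + dir + ": " + e);
                }
            }
            return usable;
        }

        public static void main(String[] args) {
            List<Path> configured = new ArrayList<>();
            for (String a : args) {
                configured.add(Paths.get(a));
            }
            List<Path> usable = usableDirs(configured);
            if (usable.isEmpty()) {
                // Only abort when every configured directory is unusable.
                throw new IllegalStateException("no usable data directories");
            }
            System.out.println("Serving blocks from: " + usable);
        }
    }

    Under this scheme a single bad disk would cost only the blocks it held, which HDFS can re-replicate from other nodes, instead of taking the whole data node offline.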

    Attachments

    Issue Links

    Activity

    People

    • Assignee: Unassigned
    • Reporter: aw Allen Wittenauer

    Dates

    • Created:
    • Updated:
    • Resolved:
