Hadoop HDFS / HDFS-138

data node process should not die if one dir goes bad


Details

• Type: Bug
• Status: Resolved
• Priority: Major
• Resolution: Duplicate
• Affects Version/s: None
• Fix Version/s: None
• Component/s: None
• Labels: None

Description

When multiple directories are configured for the data node process to store blocks in, it currently exits if one of them becomes unwritable. Instead, it should either ignore that directory entirely or continue reading from it and mark it unusable if reads fail.
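
A sketch of the behavior this asks for (hypothetical names, not actual DataNode code): probe each configured storage directory with a throwaway write, skip directories that fail the probe, and shut down only when no usable directory remains. Later Hadoop releases expose a similar tolerance through the dfs.datanode.failed.volumes.tolerated property.

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class VolumeCheck {

    /** Returns the subset of configured dirs that pass a simple write probe. */
    static List<File> usableDirs(List<File> configuredDirs) {
        List<File> usable = new ArrayList<File>();
        for (File dir : configuredDirs) {
            try {
                // Probe writability by creating and deleting a temp file.
                File probe = File.createTempFile("probe", ".tmp", dir);
                if (!probe.delete()) {
                    throw new IOException("cannot delete probe file");
                }
                usable.add(dir);
            } catch (IOException e) {
                // Proposed behavior: log and skip the bad dir instead of exiting.
                System.err.println("Marking dir unusable: " + dir + " (" + e + ")");
            }
        }
        return usable;
    }

    public static void main(String[] args) {
        List<File> configured = new ArrayList<File>();
        for (String path : args) {
            configured.add(new File(path));
        }
        List<File> usable = usableDirs(configured);
        if (usable.isEmpty()) {
            // Only give up when every configured dir has failed.
            System.err.println("No usable storage directories; shutting down.");
            System.exit(1);
        }
        System.out.println("Serving blocks from: " + usable);
    }
}
{code}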

People

• Assignee: Unassigned
• Reporter: aw (Allen Wittenauer)
• Votes: 1
• Watchers: 4
