Hadoop HDFS
HDFS-3277

Fail over to loading a different FSImage if the first one we try to load is corrupt

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 2.1.0-beta
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Most users store multiple copies of the FSImage in order to prevent catastrophic data loss if a hard disk fails. However, our image loading code is currently not set up to start reading another FSImage if loading the first one does not succeed. We should add this capability.

      We should also be sure to remove the FSImage directory that failed from the list of FSImage directories to write to, in the way we normally do when a write (as opposed to a read) fails.
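      The fallback behavior described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual HDFS Java implementation; the `.md5` sidecar layout, the function name, and the failure-reporting shape are assumptions made for the example:

```python
import hashlib
import os

def load_first_valid_image(image_paths):
    """Try each candidate FSImage copy in order; return the bytes of the
    first copy that passes its checksum, plus the list of directories
    whose copy failed to load so the caller can stop writing to them.
    (Sketch only: the .md5 sidecar naming is an assumption, not HDFS's
    actual on-disk format.)"""
    failed_dirs = []
    for path in image_paths:
        try:
            with open(path, "rb") as f:
                data = f.read()
            with open(path + ".md5") as f:
                expected = f.read().strip()
            if hashlib.md5(data).hexdigest() != expected:
                raise IOError("checksum mismatch for " + path)
            return data, failed_dirs
        except (IOError, OSError):
            # Record the directory holding the bad copy, mirroring how a
            # failed write removes a storage directory from the write list.
            failed_dirs.append(os.path.dirname(path))
    raise IOError("all FSImage copies failed to load")
```

      A caller would then drop every directory in `failed_dirs` from its set of writable image directories before continuing startup.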

      1. HDFS-3277.006.patch
        30 kB
        Andrew Wang
      2. HDFS-3277.005.patch
        33 kB
        Andrew Wang
      3. HDFS-3277.004.patch
        32 kB
        Andrew Wang
      4. HDFS-3277.003.patch
        20 kB
        Colin Patrick McCabe
      5. HDFS-3277.002.patch
        20 kB
        Colin Patrick McCabe

          Activity

          No work has yet been logged on this issue.

            People

            • Assignee:
              Andrew Wang
            • Reporter:
              Colin Patrick McCabe
            • Votes:
              0
            • Watchers:
              8
