Hadoop HDFS / HDFS-3277

fail over to loading a different FSImage if the first one we try to load is corrupt

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha1
    • Fix Version/s: 2.1.0-beta
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Most users store multiple copies of the FSImage in order to prevent catastrophic data loss if a hard disk fails. However, our image loading code is not currently set up to fall back to reading another FSImage when loading the first one fails. We should add this capability.

      We should also be sure to remove the FSImage directory that failed from the list of FSImage directories to write to, just as we normally do when a write (as opposed to a read) fails.
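      The try-each-copy-then-evict behavior described above can be sketched as follows. This is an illustrative sketch only, not HDFS code: the class `FsImageLoader`, the method `loadImage`, and `CorruptImageException` are hypothetical names, and the real implementation lives in the NameNode's FSImage loading path.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of failing over across FSImage copies.
// None of these names are the actual HDFS classes or methods.
public class FsImageLoader {

    static class CorruptImageException extends IOException {
        CorruptImageException(String msg) { super(msg); }
    }

    // Stand-in loader: here, any path containing "corrupt" simulates a
    // checksum/parse failure while reading that copy of the image.
    static String loadImage(String path) throws CorruptImageException {
        if (path.contains("corrupt")) {
            throw new CorruptImageException("cannot parse " + path);
        }
        return "namespace from " + path;
    }

    /**
     * Try each candidate FSImage in turn. A copy that fails to load is
     * recorded in {@code failed} so its directory can be dropped from the
     * write list, mirroring how a failed write evicts a directory.
     */
    static String loadFirstValid(List<String> candidates, List<String> failed)
            throws IOException {
        for (String path : candidates) {
            try {
                return loadImage(path);
            } catch (IOException e) {
                failed.add(path); // stop writing to this directory too
            }
        }
        throw new IOException("no usable FSImage found in " + candidates);
    }

    public static void main(String[] args) throws IOException {
        List<String> failed = new ArrayList<>();
        String ns = loadFirstValid(
                Arrays.asList("/data1/corrupt/fsimage", "/data2/fsimage"),
                failed);
        System.out.println(ns);                    // loaded from the second copy
        System.out.println("evicted: " + failed);  // first directory removed
    }
}
```

      The key design point is that a read failure has the same consequence for the directory as a write failure: the bad storage directory is taken out of service rather than retried indefinitely.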

        Attachments

        1. HDFS-3277.006.patch
          30 kB
          Andrew Wang
        2. HDFS-3277.005.patch
          33 kB
          Andrew Wang
        3. HDFS-3277.004.patch
          32 kB
          Andrew Wang
        4. HDFS-3277.003.patch
          20 kB
          Colin P. McCabe
        5. HDFS-3277.002.patch
          20 kB
          Colin P. McCabe

              People

              • Assignee: andrew.wang (Andrew Wang)
              • Reporter: cmccabe (Colin P. McCabe)
              • Votes: 0
              • Watchers: 7

                Dates

                • Created:
                  Updated:
                  Resolved: