Hadoop HDFS / HDFS-1623: High Availability Framework for HDFS NN / HDFS-2824

HA: failover does not succeed if prior NN died just after creating an edit log segment


Details

• Type: Sub-task
• Status: Resolved
• Priority: Major
• Resolution: Fixed
• Affects Version/s: HA branch (HDFS-1623)
• Fix Version/s: HA branch (HDFS-1623)
• Component/s: ha, namenode
• Labels: None

Description

In stress testing failover, I had the following failure:

• NN1 rolls its edit logs and starts writing edits_inprogress_1000
• NN1 crashes before writing the START_LOG_SEGMENT transaction
• NN2 tries to become active and calls recoverUnfinalizedSegments. Since the log file contains no valid transactions, it is marked as corrupt and renamed with the .corrupt suffix
• The sanity check in openForWrite then refuses to open a new in-progress log at the same txid, so the failover does not proceed (illustrated by the sketch after this list)
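
To make the sequence concrete, below is a minimal, self-contained Java sketch of the interaction described above. All names in it (EditLogRecoverySketch, its recoverUnfinalizedSegment and openForWrite methods, the startsWith-based check) are hypothetical stand-ins chosen for illustration, not the actual FSEditLog/FileJournalManager code on the HA branch; it only models the behavior in the bullets.

{code:java}
import java.io.File;
import java.io.IOException;

/**
 * Hypothetical, simplified model of the failure sequence above.
 * It is NOT the real FSEditLog / FileJournalManager logic; it only
 * imitates the reported behavior with plain files in a temp directory.
 */
public class EditLogRecoverySketch {

  private final File dir;

  public EditLogRecoverySketch(File dir) {
    this.dir = dir;
  }

  /**
   * Models NN2's recovery step: an in-progress segment that contains
   * no valid transactions is treated as corrupt and renamed aside.
   */
  void recoverUnfinalizedSegment(long startTxId) throws IOException {
    File inProgress = new File(dir, "edits_inprogress_" + startTxId);
    if (inProgress.exists() && inProgress.length() == 0) {
      // NN1 died before writing START_LOG_SEGMENT, so the file is empty.
      File corrupt = new File(dir, inProgress.getName() + ".corrupt");
      if (!inProgress.renameTo(corrupt)) {
        throw new IOException("Failed to rename " + inProgress);
      }
    }
  }

  /**
   * Models NN2 becoming active: refuse to start a new segment if any
   * file for the same starting txid is still present in the directory.
   */
  void openForWrite(long startTxId) throws IOException {
    for (File f : dir.listFiles()) {
      if (f.getName().startsWith("edits_inprogress_" + startTxId)) {
        // The .corrupt file still matches, so the new segment is refused
        // and the failover stalls, as in the last bullet above.
        throw new IllegalStateException("Refusing to start segment at txid "
            + startTxId + ": found existing file " + f.getName());
      }
    }
    new File(dir, "edits_inprogress_" + startTxId).createNewFile();
  }

  public static void main(String[] args) throws IOException {
    File dir = java.nio.file.Files.createTempDirectory("edits").toFile();

    // NN1 rolls its logs and creates the new segment file...
    new File(dir, "edits_inprogress_1000").createNewFile();
    // ...then crashes before writing the START_LOG_SEGMENT transaction.

    EditLogRecoverySketch nn2 = new EditLogRecoverySketch(dir);
    nn2.recoverUnfinalizedSegment(1000);   // empty segment renamed to .corrupt
    try {
      nn2.openForWrite(1000);              // NN2 tries to start its own segment
    } catch (IllegalStateException e) {
      // This is the point where the failover gets stuck in the report above.
      System.out.println("Failover blocked: " + e.getMessage());
    }
  }
}
{code}

Running main() ends with the "Failover blocked" message from the sanity check in this sketch, which corresponds to the point where the second NN gets stuck in the scenario above.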

Attachments

1. HDFS-2824-HDFS-1623.patch (27 kB, Aaron Myers)
2. HDFS-2824-HDFS-1623.patch (26 kB, Aaron Myers)


People

• Assignee: Aaron Myers (atm)
• Reporter: Todd Lipcon (tlipcon)
• Votes: 0
• Watchers: 2

Dates

• Created:
• Updated:
• Resolved: