Hadoop HDFS › HDFS-1623 (High Availability Framework for HDFS NN) › HDFS-2824

HA: failover does not succeed if prior NN died just after creating an edit log segment


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: HA branch (HDFS-1623)
    • Fix Version/s: HA branch (HDFS-1623)
    • Component/s: ha, namenode
    • Labels: None

    Description

      In stress testing failover, I had the following failure:

      • NN1 rolls edit logs and starts writing edits_inprogress_1000.
      • NN1 crashes before writing the START_LOG_SEGMENT transaction.
      • NN2 tries to become active and calls recoverUnfinalizedSegment. Since the log file contains no valid transactions, it is marked as corrupt and renamed with the .corrupt suffix.
      • The sanity check in openLogsForWrite then refuses to open a new in-progress log at the same txid, so failover does not proceed.
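      The fix that falls out of the scenario above is to treat a zero-transaction in-progress segment as if it were never created, rather than marking it corrupt, so the new active NN can open a fresh segment at the same txid. A minimal sketch of that decision (hypothetical class and method names; this is not the actual patch):

```java
/**
 * Sketch of the recovery decision described in this issue. The names
 * EmptySegmentRecovery and the String return values are illustrative
 * only; the real logic lives in the NN's edit-log recovery path.
 */
public class EmptySegmentRecovery {

    /** Decide what to do with an unfinalized segment found during failover. */
    static String recoverUnfinalizedSegment(long firstTxId, int validTxCount) {
        if (validTxCount == 0) {
            // The prior NN died before writing START_LOG_SEGMENT: the segment
            // carries no state, so discard it instead of renaming it .corrupt.
            // This lets openLogsForWrite recreate edits_inprogress_<firstTxId>.
            return "discard edits_inprogress_" + firstTxId;
        }
        // A segment with valid transactions is finalized up to its last txid.
        return "finalize edits_" + firstTxId + "-" + (firstTxId + validTxCount - 1);
    }

    public static void main(String[] args) {
        System.out.println(recoverUnfinalizedSegment(1000, 0));
        System.out.println(recoverUnfinalizedSegment(1000, 5));
    }
}
```

      With this behavior, the crash window between creating the segment file and writing its first transaction no longer blocks failover.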

      Attachments

        1. HDFS-2824-HDFS-1623.patch
          27 kB
          Aaron Myers
        2. HDFS-2824-HDFS-1623.patch
          26 kB
          Aaron Myers


          People

            Assignee: atm (Aaron Myers)
            Reporter: tlipcon (Todd Lipcon)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved:
