
HDFS-4128: 2NN gets stuck in inconsistent state if edit log replay fails in the middle


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.2-alpha
    • Fix Version/s: 0.23.7, 2.1.0-beta
    • Component/s: namenode
    • Labels: None

    Description

      We saw the following issue in a cluster:

      • The 2NN downloads an edit log segment:
        2012-10-29 12:30:57,433 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /xxxxxxx/current/edits_0000000000049136809-0000000000049176162 expecting start txid #49136809
        
      • It fails in the middle of replay due to an OOME:
        2012-10-29 12:31:21,021 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation AddOp [length=0, path=/xxxxxxxx
        java.lang.OutOfMemoryError: Java heap space
        
      • Future checkpoints then fail because the prior edit log replay only got partway through the stream, leaving the in-memory state at a mid-segment txid (see the sketch after this list):
        2012-10-29 12:32:21,214 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /xxxxx/current/edits_0000000000049176163-0000000000049177224 expecting start txid #49144432
        2012-10-29 12:32:21,216 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
        java.io.IOException: There appears to be a gap in the edit log.  We expected txid 49144432, but got txid 49176163.
        

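      The gap error above follows directly from how replay tracks progress. Below is a minimal sketch of the mechanism in Java, with hypothetical class, field, and method names (the real logic lives in FSEditLogLoader and FSImage): the loader advances its last-applied txid one operation at a time, so when an OutOfMemoryError escapes mid-segment, the in-memory namespace is left pointing at a txid inside that segment, and every subsequent segment trips the gap check.

        // Minimal sketch of the failure mode. Class, field, and method names
        // are hypothetical, not the actual FSEditLogLoader/FSImage code.
        import java.io.IOException;

        public class EditReplaySketch {
          // Highest txid successfully applied to the in-memory namespace.
          private long lastAppliedTxId = 49136808L;

          // Replays one downloaded edits segment covering [startTxId, endTxId].
          void loadEdits(long startTxId, long endTxId) throws IOException {
            if (startTxId != lastAppliedTxId + 1) {
              // The check that fails on every checkpoint after the OOME:
              throw new IOException("There appears to be a gap in the edit log."
                  + "  We expected txid " + (lastAppliedTxId + 1)
                  + ", but got txid " + startTxId + ".");
            }
            for (long txid = startTxId; txid <= endTxId; txid++) {
              applyOp(txid);          // an OutOfMemoryError thrown here escapes...
              lastAppliedTxId = txid; // ...leaving lastAppliedTxId mid-segment
            }
          }

          private void applyOp(long txid) {
            // Apply one edit operation to the in-memory namespace (elided).
          }
        }

      Plugging in the numbers from the logs: replay of the segment starting at txid 49136809 died after applying txid 49144431, so the next checkpoint read the following segment (start txid 49176163) while still expecting 49144432, and the gap exception recurs on every retry because nothing rolls the half-modified in-memory state back.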
    Attachments

      1. hdfs-4128.patch (12 kB, Kihwal Lee)
      2. hdfs-4128.patch (12 kB, Kihwal Lee)
      3. hdfs-4128.b023.patch (11 kB, Kihwal Lee)


    People

      Assignee: Kihwal Lee
      Reporter: Todd Lipcon
      Votes: 0
      Watchers: 10
