Hadoop HDFS / HDFS-3605

Block mistakenly marked corrupt during edit log catchup phase of failover

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: 2.0.2-alpha
    • Component/s: ha, namenode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Steps to reproduce:

      1. Open a file for append.
      2. Write data and sync it.
      3. After the next log roll and edit log tailing in the standby NameNode, close the append stream.
      4. Call append multiple times on the same file before the next edit log roll.
      5. Abruptly kill the current active NameNode.

      After failover, the block is reported as missing. (A client-side sketch of these steps follows.)
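
      Below is a minimal client-side sketch of these reproduction steps against the Hadoop FileSystem API. It assumes an HA cluster is already running with fs.defaultFS pointing at the HA nameservice; the log-roll/tailing timing is approximated with a sleep, and the path and data sizes are hypothetical. The attached TestAppendBlockMiss.java is the authoritative reproduction.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendBlockMissRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/test/appendBlockMiss"); // hypothetical path

    // Steps 1-2: create the file, write data, and sync it to the datanodes.
    FSDataOutputStream out = fs.create(file);
    out.write(new byte[1024]);
    out.hflush();

    // Step 3: wait until the active NN has rolled its edit log and the
    // standby has tailed it (timing-dependent; the sleep stands in for
    // the default roll/tail periods), then close the append stream.
    Thread.sleep(120_000);
    out.close();

    // Step 4: append to the same file several times before the next roll.
    for (int i = 0; i < 3; i++) {
      FSDataOutputStream appendOut = fs.append(file);
      appendOut.write(new byte[1024]);
      appendOut.close();
    }

    // Step 5: abruptly kill the active NameNode (e.g. kill -9) so the
    // standby takes over; after catchup it mis-marks the block corrupt.
  }
}
{code}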

      This may be because all of the latest block reports were queued in the standby NameNode. During failover, processing the first OP_CLOSE drained that pending queue and added the block to the corrupt replicas list (see the schematic sketch below).
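
      The following is a purely schematic sketch of the suspected failure mode; the class and method names are invented for illustration and do not correspond to the actual NameNode source. It shows how draining queued datanode block reports whose generation stamps no longer match the expected one can mark a replica corrupt, even though a newer, valid report is still waiting in the queue.

{code:java}
// Schematic only: all names are hypothetical, not actual Hadoop source.
import java.util.ArrayDeque;
import java.util.Queue;

class StandbyCatchupSketch {
  static class QueuedReport {
    final long blockId;
    final long genStamp;  // generation stamp the datanode reported
    QueuedReport(long blockId, long genStamp) {
      this.blockId = blockId;
      this.genStamp = genStamp;
    }
  }

  // Reports that arrived while the standby was behind on edits.
  final Queue<QueuedReport> pending = new ArrayDeque<>();
  long storedGenStamp;  // genstamp the standby expects after catchup

  // Applied to each queued report while catching up on the edit log.
  void processQueuedReport(QueuedReport r) {
    if (r.genStamp != storedGenStamp) {
      // Suspected buggy path: a report queued before the later appends
      // carries an older genstamp, so the replica is marked corrupt even
      // though a newer, matching report is still waiting in the queue.
      System.out.println("block " + r.blockId + " marked CORRUPT");
    } else {
      System.out.println("block " + r.blockId + " accepted as valid");
    }
  }

  public static void main(String[] args) {
    StandbyCatchupSketch nn = new StandbyCatchupSketch();
    // One report per append, each with the genstamp current when sent.
    nn.pending.add(new QueuedReport(42L, 1001L)); // stale after later appends
    nn.pending.add(new QueuedReport(42L, 1003L)); // matches final genstamp
    nn.storedGenStamp = 1003L;
    // Draining the whole queue on the first OP_CLOSE hits the stale
    // report first and mis-marks block 42 corrupt.
    while (!nn.pending.isEmpty()) {
      nn.processQueuedReport(nn.pending.poll());
    }
  }
}
{code}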

    Attachments

      1. hdfs-3605.txt (10 kB, Todd Lipcon)
      2. hdfs-3605.txt (11 kB, Todd Lipcon)
      3. hdfs-3605.txt (11 kB, Todd Lipcon)
      4. HDFS-3605.patch (12 kB, Uma Maheswara Rao G)
      5. TestAppendBlockMiss.java (4 kB, Brahma Reddy Battula)

    People

    • Assignee: Todd Lipcon
    • Reporter: Brahma Reddy Battula
    • Votes: 0
    • Watchers: 11
