HDFS-2305: Running multiple 2NNs can result in corrupt file system


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.20.2
    • Fix Version/s: 1.1.0
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Here's the scenario:

      • You run the NN and 2NN (2NN A) on the same machine.
      • You don't have the address of the 2NN configured, so it's defaulting to 127.0.0.1.
      • There's another 2NN (2NN B) running on a second machine.
      • When a 2NN is done checkpointing, it says "hey NN, I have an updated fsimage for you. You can download it from this URL, which includes my IP address, which is x" (see the sketch after this list).
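
      That advertised URL is the crux: the NN downloads the merged fsimage from whatever HTTP address the 2NN reports, configured or not. Here is a minimal Java sketch of that shape, not the actual SecondaryNameNode code: the property name "dfs.secondary.http.address" is the 0.20-era key, the loopback fallback is assumed purely to mirror the report's "defaulting to 127.0.0.1", and the hostnames are hypothetical.

      {code:java}
      import java.net.InetSocketAddress;
      import java.util.Map;

      public class CheckpointUrlSketch {

          /** Build the "download my fsimage from here" URL a 2NN reports to the NN. */
          static String advertisedFsImageUrl(Map<String, String> conf) {
              // If the operator never set the 2NN address, fall back to the loopback.
              String httpAddr = conf.getOrDefault("dfs.secondary.http.address",
                                                  "127.0.0.1:50090");
              InetSocketAddress addr = parse(httpAddr);
              return "http://" + addr.getHostString() + ":" + addr.getPort()
                      + "/getimage?getimage=1";
          }

          static InetSocketAddress parse(String hostPort) {
              int idx = hostPort.lastIndexOf(':');
              return new InetSocketAddress(hostPort.substring(0, idx),
                                           Integer.parseInt(hostPort.substring(idx + 1)));
          }

          public static void main(String[] args) {
              // Address left unset (2NN A's situation): the NN is told to fetch from
              // 127.0.0.1, i.e. from whichever 2NN lives on the NN's own machine.
              System.out.println(advertisedFsImageUrl(Map.of()));

              // Address configured explicitly (what 2NN B would need): the NN fetches
              // from the right machine. The hostname is hypothetical.
              System.out.println(advertisedFsImageUrl(
                      Map.of("dfs.secondary.http.address", "snn-b.example.com:50090")));
          }
      }
      {code}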

      And here are the steps that cause this issue:

      1. Some edits happen.
      2. 2NN A (on the NN machine) does a checkpoint. All is dandy.
      3. Some more edits happen.
      4. 2NN B (on a different machine) does a checkpoint. It tells the NN "grab the newly-merged fsimage file from 127.0.0.1"
      5. NN happily grabs the fsimage from 2NN A (the 2NN on the NN machine), which is stale.
      6. NN renames the edits.new file to edits. At this point the in-memory FS state is fine, but the on-disk state is missing edits.
      7. The next time a 2NN (any 2NN) tries to do a checkpoint, it gets an up-to-date edits file, with an outdated fsimage, and tries to apply those edits to that fsimage.
      8. Kaboom.
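
      Steps 6 through 8 are where the on-disk state diverges and then fails. A toy Java simulation of that sequence, not HDFS code: the edits are modeled here as simple path operations purely for illustration.

      {code:java}
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      public class StaleCheckpointSketch {

          /** Apply recorded operations ("create <path>" / "delete <path>") to a namespace. */
          static void applyEdits(Set<String> namespace, List<String> edits) {
              for (String op : edits) {
                  String[] parts = op.split(" ");
                  if (parts[0].equals("create")) {
                      namespace.add(parts[1]);
                  } else {                                  // delete
                      if (!namespace.remove(parts[1])) {
                          // Step 8: the edit refers to state the stale image never had.
                          throw new IllegalStateException("edit '" + op
                                  + "' does not apply to this fsimage -- kaboom");
                      }
                  }
              }
          }

          public static void main(String[] args) {
              // Step 2: 2NN A checkpoints after /a is created. All is dandy.
              Set<String> imageFromA = new HashSet<>(List.of("/a"));

              // Step 3: more edits happen (create /b). Steps 4-6: 2NN B checkpoints,
              // but the NN fetches the stale image from 127.0.0.1 (2NN A) and still
              // rolls edits, so the on-disk image never learns about /b.
              Set<String> onDiskImage = new HashSet<>(imageFromA);  // stale: missing /b

              // Step 7: the next checkpoint replays the current edits, which assume
              // /b exists, onto the stale image.
              List<String> currentEdits = List.of("delete /b");
              applyEdits(onDiskImage, currentEdits);                // throws -- step 8
          }
      }
      {code}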

      Attachments

        1. hdfs-2305.0.patch (15 kB, Aaron Myers)
        2. hdfs-2305.1.patch (16 kB, Aaron Myers)
        3. hdfs-2305-test.patch (3 kB, Aaron Myers)

            People

              Assignee: Aaron Myers (atm)
              Reporter: Aaron Myers (atm)
              Votes: 0
              Watchers: 13
