Hadoop HDFS / HDFS-1263

0.20: in tryUpdateBlock, the meta file is renamed away before genstamp validation is done

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.20-append
    • Fix Version/s: 0.20-append
    • Component/s: datanode
    • Labels: None

      Description

      Saw an issue where multiple datanodes are trying to recover at the same time, and all of them failed. I think the issue is in FSDataset.tryUpdateBlock: we rename blk_B_OldGS to blk_B_OldGS_tmpNewGS and only then check that the generation stamp is moving upwards. Because of this, an invalid updateBlock call is rejected, but by then it has already renamed the meta file away, so all future updateBlock calls fail with "Meta file not found" errors.
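
      For illustration, a minimal sketch in Java of the ordering described above (hypothetical names and signatures, not the actual FSDataset code):

      import java.io.File;
      import java.io.IOException;

      // Hypothetical sketch of the buggy ordering: the on-disk rename
      // happens before the generation stamp is validated.
      class BuggyOrderSketch {
          static void tryUpdateBlock(File oldMeta, long oldGS, long newGS)
                  throws IOException {
              // 1. Rename blk_B_oldGS.meta away to a tmp name (mutates disk state).
              File tmpMeta = new File(oldMeta.getPath() + "_tmp" + newGS);
              if (!oldMeta.renameTo(tmpMeta)) {
                  throw new IOException("rename failed: " + oldMeta);
              }
              // 2. Only now is the genstamp validated. A stale call fails here,
              //    but the meta file is already gone from its expected name, so
              //    every later updateBlock sees "Meta file not found".
              if (newGS <= oldGS) {
                  throw new IOException("generation stamp must move upwards: "
                          + oldGS + " -> " + newGS);
              }
              // ... truncate tmpMeta and rename it to blk_B_newGS.meta ...
          }
      }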

          Activity

          Todd Lipcon added a comment -

          Yep. Initially it wasn't entirely clear they were the same, but I think you're right. And phew, very glad it was a simple fix.

          dhruba borthakur added a comment -

          After reading through the description, it appears to me that this bug is similar to the one described in HDFS-1260. Shall we close this one as a duplicate?

          Todd Lipcon added a comment -

          I think I actually fixed this with a patch in HDFS-1260 - the problems turned out to be the same. Can you take a look at that patch and let me know what you think?

          sam rash added a comment -

          So if I follow:

          checking that the genstamp of the file is less than the one we are trying to update to, before doing any mutation of blocks or metadata (i.e. renaming), should fix this issue.
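
          A minimal sketch of that reordering, under the same hypothetical names as the sketch above (the actual fix landed via the HDFS-1260 patch):

          import java.io.File;
          import java.io.IOException;

          // Hypothetical sketch of the fix: validate the genstamp while the
          // on-disk state is still untouched, so a rejected (stale) call
          // leaves the replica intact.
          class FixedOrderSketch {
              static void tryUpdateBlock(File oldMeta, long oldGS, long newGS)
                      throws IOException {
                  // 1. Sanity check first: a stale recovery fails harmlessly here.
                  if (newGS <= oldGS) {
                      throw new IOException("generation stamp must move upwards: "
                              + oldGS + " -> " + newGS);
                  }
                  // 2. Only a validated call ever touches the meta file.
                  File tmpMeta = new File(oldMeta.getPath() + "_tmp" + newGS);
                  if (!oldMeta.renameTo(tmpMeta)) {
                      throw new IOException("rename failed: " + oldMeta);
                  }
                  // ... truncate tmpMeta and rename it to blk_B_newGS.meta ...
              }
          }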

          Regarding throwing an IOE on concurrent recovery in the same node, that might be problematic if:

          DN A can talk to DN B, but not DN C
          DN B can talk to DN A and DN C

          DN A starts recovery first
          DN B starts after
          If DN B talks to DN A before DN A times out talking to DN C, we'll fail a recovery that could succeed, no?

          I like the idea of failing these as early in the pipeline as possible, but I lean towards fixing the genstamp detection. It seems like the whole genstamp process is designed for this--there's just a tiny bug with the rename.

          Todd Lipcon added a comment -

          also, we check the genstamp is moving upwards both at the start of updateBlock and at the end of tryUpdateBlock. do you know why?

          Oops, missed this question - probably because the synchronization lock is dropped and reacquired in every cycle of checking that no threads are writing the block. But we should certainly do the sanity check at the top of the function instead of after renaming the meta file.
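
          A sketch of the check-wait-recheck pattern being described; the class, field, and method names (activeWriters, checkGenStamp) are illustrative assumptions:

          import java.io.IOException;

          // Hypothetical sketch of why the genstamp check appears twice: the
          // monitor is released inside wait(), so the replica's state can
          // change between the early check and the moment files are mutated.
          class DoubleCheckSketch {
              private long currentGS;
              private int activeWriters;

              synchronized void updateBlock(long newGS)
                      throws IOException, InterruptedException {
                  checkGenStamp(newGS);      // early check: fail fast, no mutation
                  while (activeWriters > 0) {
                      wait(1000);            // monitor is released during each wait
                  }
                  checkGenStamp(newGS);      // re-check: state may have moved on
                  // ... now safe to rename/truncate the meta file ...
                  currentGS = newGS;
              }

              private void checkGenStamp(long newGS) throws IOException {
                  if (newGS <= currentGS) {
                      throw new IOException("generation stamp must move upwards: "
                              + currentGS + " -> " + newGS);
                  }
              }
          }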

          Todd Lipcon added a comment -

          can you also explain the error state that results? (truncated blocks, infinite loops, bad meta-data, etc)

          Yea, what happened is that we had three replicas, and this managed to happen on all three, somehow, due to multiple concurrent recoveries. It's tough to parse out from the logs, but I think basically it was the following sequence of events:

          1. DN A told to recover blk_B_1
          2. DN A gets block info from all DNs, and a new genstamp 2
          3. DN A gets disconnected from network, gets swapped out, whatever, for a minute
          4. NN times out the block recovery lease after 60s and another recovery is initiated by a client still calling appendFile()
          5. DN B is told to recover blk_B_1
          6. DN B starts to get block info from all nodes - this takes a while trying to talk to DN A because it's still paused
          7. DN A comes back to life
          8. DN B receives block info from A and asks for new genstamp (3)
          9. DN B wins the updateBlock race, and updates all replicas to genstamp 3
          10. DN A calls updateBlock on all replicas, asking to go genstamp 1 -> genstamp 2. This fails because the current genstamp is 3 on all replicas. In the process of failing, it effectively trashes the meta file by renaming it.
          11. All further attempts to recover the block fail because all replicas get the "no meta file" error.

          The above seems really contrived, but I'm pretty sure that's what I saw happen. This JIRA deals with step 10 - a stale updateBlock call coming from some DN that had paused during recovery results in corrupting the replica (by renaming away its meta file).

          It's also suspicious that a node will allow recovery to start (the startBlockRecovery call) if it thinks it's already the primary DN for recovery on that block. To fix that, we could make startBlockRecovery throw an IOE if it finds the block in the ongoingRecovery map and the call is not coming from itself.
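
          A sketch of that guard; ongoingRecovery comes from the comment above, while the surrounding class and parameter names are assumptions:

          import java.io.IOException;
          import java.util.HashMap;
          import java.util.Map;

          // Hypothetical sketch: reject startBlockRecovery for a block already
          // in the ongoingRecovery map, unless this node itself is the caller.
          class RecoveryGuardSketch {
              private final Map<Long, String> ongoingRecovery =
                      new HashMap<Long, String>();
              private final String selfDnId;

              RecoveryGuardSketch(String selfDnId) {
                  this.selfDnId = selfDnId;
              }

              synchronized void startBlockRecovery(long blockId, String callerDnId)
                      throws IOException {
                  if (ongoingRecovery.containsKey(blockId)
                          && !selfDnId.equals(callerDnId)) {
                      throw new IOException("Block " + blockId
                              + " is already under recovery on this node");
                  }
                  ongoingRecovery.put(blockId, callerDnId);
              }
          }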

          sam rash added a comment -

          Several thoughts/comments:

          My reading of the code is that the temp file was to make the creation of a meta file that is both truncated and has the new genstamp an atomic operation on the filesystem. If we rename first and then crash and recover, how do we know whether the truncation finished (without the NN or another node giving us a new length)? If we truncate first, then we have effectively corrupted the block.
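
          A sketch of the atomicity argument in this paragraph, with hypothetical names: the non-atomic truncation happens on the tmp file, and only an atomic rename publishes the result:

          import java.io.File;
          import java.io.IOException;
          import java.io.RandomAccessFile;

          // Hypothetical sketch: a crash mid-truncate leaves only a tmp file
          // behind, never a half-truncated file under the final meta name.
          class AtomicMetaUpdateSketch {
              static void updateMeta(File oldMeta, File newMeta, long newLen)
                      throws IOException {
                  File tmp = new File(newMeta.getPath() + "_tmp");
                  if (!oldMeta.renameTo(tmp)) {
                      throw new IOException("rename to tmp failed: " + oldMeta);
                  }
                  RandomAccessFile raf = new RandomAccessFile(tmp, "rw");
                  try {
                      raf.setLength(newLen);     // truncation may crash part-way...
                  } finally {
                      raf.close();
                  }
                  if (!tmp.renameTo(newMeta)) {  // ...but this rename is atomic
                      throw new IOException("rename to final failed: " + tmp);
                  }
              }
          }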

          Can you also explain the error state that results? (truncated blocks, infinite loops, bad metadata, etc.)

          And do I follow that a client started two lease recoveries? Or was this a client and the NN somehow? (i.e., how were there concurrent recoveries of the same block?)

          Seems like extra synchronization in parts of updateBlock might help as well.

          Also, we check that the genstamp is moving upwards both at the start of updateBlock and at the end of tryUpdateBlock. Do you know why?

          Todd Lipcon added a comment -

          This is ancient code, so I'm not sure of all the history here... but a couple of things come to mind:

          1) We can verify the genstamp before we do anything to the files on disk... this is probably a good thing.
          2) I don't quite get why we rename to a tmp file in the first place. We shouldn't have concurrent writers, since we verified that at the top of the function, and it's synchronized so we won't get new ones. Moving to a tmp file in the middle just creates another failure scenario where the DN crashes in between the renames.

          Can we just rename directly from the old genstamp meta name to the new genstamp name?
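
          A sketch of the direct rename being proposed, with hypothetical names and the standard blk_<id>_<genstamp>.meta naming:

          import java.io.File;
          import java.io.IOException;

          // Hypothetical sketch: rename straight from the old genstamp name to
          // the new one, skipping the intermediate tmp file. Any truncation
          // would then happen in place on newMeta, which trades away the
          // crash-atomicity the tmp file provided.
          class DirectRenameSketch {
              static void renameMeta(File dir, long blockId, long oldGS, long newGS)
                      throws IOException {
                  File oldMeta = new File(dir, "blk_" + blockId + "_" + oldGS + ".meta");
                  File newMeta = new File(dir, "blk_" + blockId + "_" + newGS + ".meta");
                  if (!oldMeta.renameTo(newMeta)) {
                      throw new IOException("rename failed: " + oldMeta
                              + " -> " + newMeta);
                  }
              }
          }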


            People

            • Assignee: Todd Lipcon
            • Reporter: Todd Lipcon
            • Votes: 0
            • Watchers: 4
