Hadoop HDFS / HDFS-265 Revisit append / HDFS-550

DataNode restarts may introduce corrupt/duplicated/lost replicas when handling detached replicas



    • Type: Sub-task
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.21.0
    • Fix Version/s: Append Branch
    • Component/s: datanode
    • Labels:
    • Hadoop Flags:


      Current trunk first calls detach to unlink a finalized replica before appending to the block. Unlink is done by temporarily copying the block file from the "current" subtree to a directory called "detach" under the volume's data directory, then copying it back once the unlink succeeds. On restart, a DataNode recovers a failed unlink by copying the replicas under "detach" back to "current".

      There are two bugs with this implementation:
      1. The "detach" directory is not included in a snapshot, so a rollback will cause the detaching replicas to be lost.
      2. After a replica is copied to the "detach" directory, the information about its original location is lost. The current implementation erroneously assumes that the replica being unlinked is under "current". This can cause two replica instances with the same block id to coexist on a DataNode. Also, if a replica under "detach" is corrupt, the corrupt replica is moved to "current" without being detected, polluting the DataNode's data.
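The corruption path in bug 2 can be sketched with a toy model (the class and method names below are illustrative only, not the actual DataNode code), representing each directory as a map from block id to file contents:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the detach/recovery flow described above.
// Each "directory" is a map from block id to file contents.
public class DetachSketch {
    // "current" subtree: finalized replicas keyed by block id.
    static Map<Long, String> current = new HashMap<>();
    // "detach" directory: temporary copies made while unlinking.
    static Map<Long, String> detach = new HashMap<>();

    // First step of unlink: copy the block file into "detach".
    // Note that the replica's original location is NOT recorded --
    // this is the information loss behind bug 2.
    static void startUnlink(long blockId) {
        detach.put(blockId, current.get(blockId));
    }

    // Restart recovery as described in the issue: blindly copy
    // everything under "detach" back into "current", assuming that
    // is where each replica came from, with no integrity check.
    static void recoverOnRestart() {
        current.putAll(detach);
        detach.clear();
    }

    public static void main(String[] args) {
        current.put(100L, "v1");
        startUnlink(100L);            // copy lands in "detach"
        detach.put(100L, "CORRUPT");  // suppose the detached copy is damaged
        recoverOnRestart();           // corruption flows into "current"
        if (!"CORRUPT".equals(current.get(100L)))
            throw new AssertionError("unexpected recovery result");
        System.out.println("current replica 100: " + current.get(100L));
    }
}
```

Because recovery neither verifies the copied replica nor knows where it originally lived, a damaged or stale copy under "detach" silently replaces (or duplicates) the replica in "current".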


        1. detach.patch
          21 kB
          Hairong Kuang
        2. detach1.patch
          22 kB
          Hairong Kuang
        3. detach2.patch
          23 kB
          Hairong Kuang



            • Assignee:
              hairong Hairong Kuang


              • Created:
