HDFS-15209: Lease recovery: namenode not able to commitBlockSynchronization if client comes back and closes the file beforehand

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 3.1.2, 3.1.3
    • Fix Version/s: None
    • Component/s: namenode
    • Labels: None

    Description

      We observed a case where the client closes the file after soft lease recovery has already started, but before the namenode runs commitBlockSynchronization.

      This leads to a commitBlockSynchronization failure with the error below, since commitBlockSynchronization requires that either the file is not yet closed or the last block is not in COMPLETE state.

      As a result, we end up with replicas marked corrupt due to a genstamp mismatch, since the datanodes may have already finished block recovery with a new genstamp.

      This can happen when the client stalls for a long time during a write and comes back after lease recovery has already been triggered by a write/append/truncate request from another client.
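
      For context, the guard that produces the DEBUG message in the log below can be thought of as the check sketched here. This is an illustrative helper only, not the actual FSNamesystem code; INodeFile and BlockInfo are real namenode classes, but the method itself is hypothetical.

      import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
      import org.apache.hadoop.hdfs.server.namenode.INodeFile;

      public final class CommitBlockSyncPrecondition {
        private CommitBlockSyncPrecondition() {}

        /**
         * True if the namenode can still apply a block recovery result to
         * this file. Once the client has closed the file, the file is no
         * longer under construction and its last block is COMPLETE, so the
         * new generation stamp and length from block recovery can no longer
         * be committed, which is the failure described above.
         */
        public static boolean canCommitBlockSynchronization(INodeFile file) {
          BlockInfo lastBlock = file.getLastBlock();
          return file.isUnderConstruction()
              && lastBlock != null
              && !lastBlock.isComplete();
        }
      }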

      Repro steps (a hypothetical test sketch follows the list):

      1. Client #1 finishes writing a file, but hasn't closed it yet.
      2. Client #1 doesn't renew its lease for the soft lease period.
      3. Another client, #2, appends to the same file.
      4. Soft lease recovery begins.
      5. Block recovery in datanodes finishes.
      6. Client #1 comes back to close the file.
      7. The close succeeds since Client #1 still holds the lease (the lease isn't revoked until close during soft lease recovery).
      8. The namenode tries to commitBlockSynchronization and logs the error below.
      9. The namenode and datanodes now have different genstamps for this file, resulting in a corrupted block.
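
      The race above can be outlined as a unit test against MiniDFSCluster, roughly along the lines of the sketch below. This is a hypothetical outline, not the test from the attached patches; the class and test names are made up, the sleep standing in for datanode block recovery is illustrative, and the timing may need tuning to hit the window reliably.

      import java.io.IOException;
      import java.net.URI;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;
      import org.apache.hadoop.hdfs.HdfsConfiguration;
      import org.apache.hadoop.hdfs.MiniDFSCluster;
      import org.junit.Test;

      public class TestCloseDuringSoftLeaseRecovery {

        @Test
        public void testCloseRacesWithCommitBlockSynchronization() throws Exception {
          Configuration conf = new HdfsConfiguration();
          MiniDFSCluster cluster =
              new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
          try {
            cluster.waitActive();
            URI nnUri = cluster.getURI();
            Path file = new Path("/race.dat");

            // Steps 1-2: client #1 writes but does not close, and its lease
            // is made to look expired by shrinking the namenode's soft limit.
            DistributedFileSystem client1 =
                (DistributedFileSystem) FileSystem.newInstance(nnUri, conf);
            FSDataOutputStream out = client1.create(file);
            out.write(new byte[64 * 1024]);
            out.hflush();
            cluster.setLeasePeriod(1L, 60L * 60 * 1000);

            // Steps 3-4: client #2 appends the same file, which makes the
            // namenode start soft lease recovery; the append itself is
            // expected to fail while recovery is in progress.
            DistributedFileSystem client2 =
                (DistributedFileSystem) FileSystem.newInstance(nnUri, conf);
            try {
              client2.append(file).close();
            } catch (IOException expected) {
              // e.g. RecoveryInProgressException
            }

            // Step 5: give the datanodes time to finish block recovery.
            Thread.sleep(5000);

            // Steps 6-9: client #1 comes back and closes the file. With the
            // bug, the close succeeds and the later commitBlockSynchronization
            // on the namenode logs the "not under construction" error.
            out.close();
          } finally {
            cluster.shutdown();
          }
        }
      }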

      Fix:

      Check the state of the last block when completing the file. If it is under recovery, lease recovery has started but the namenode hasn't run commitBlockSynchronization yet.

      In this case, don't complete the file.
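
      A minimal sketch of that check, assuming it is applied where the namenode handles completeFile. Only INodeFile, BlockInfo and BlockUCState are real HDFS classes; the helper itself is hypothetical and is not taken from the attached patches.

      import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
      import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
      import org.apache.hadoop.hdfs.server.namenode.INodeFile;

      public final class LastBlockRecoveryCheck {
        private LastBlockRecoveryCheck() {}

        /**
         * Returns true if the file's last block is currently UNDER_RECOVERY,
         * i.e. lease recovery has started but commitBlockSynchronization has
         * not run yet. In that case the completeFile request should not close
         * the file, so the recovery result (new genstamp and length) can
         * still be committed.
         */
        public static boolean lastBlockUnderRecovery(INodeFile file) {
          BlockInfo lastBlock = file.getLastBlock();
          return lastBlock != null
              && lastBlock.getBlockUCState() == BlockUCState.UNDER_RECOVERY;
        }
      }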

       

      2020-02-22 22:47:04,698 INFO [IPC Server handler 32 on 8020] org.apache.hadoop.hdfs.server.namenode.FSNamesystem: commitBlockSynchronization(oldBlock=BP-269461681-10.65.230.22-1554624547020:blk_2642650669_3063725879, newgenerationstamp=3063765480, newlength=262144000, newtargets=[25.65.180.47:10010, 25.65.161.162:10010, 100.101.88.162:10010], closeFile=true, deleteBlock=false)
      
      2020-02-22 22:47:04,698 DEBUG [IPC Server handler 32 on 8020] org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Unexpected block (=BP-269461681-10.65.230.22-1554624547020:blk_2642650669_3063725879) since the file (=132269111992796228.data.637180347427616457.tmp.132269136349107823.copying) is not under construction
      

       

      Attachments

        1. HDFS-15209.000.patch (9 kB, Ye Ni)
        2. HDFS-15209.001.patch (9 kB, Ye Ni)


            People

              Assignee: Ye Ni (NickyYe)
              Reporter: Ye Ni (NickyYe)
              Votes: 0
              Watchers: 4
