Hadoop HDFS / HDFS-14941

Potential editlog race condition can cause corrupted file



    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.3.0, 3.2.2, 2.10.1
    • Component/s: namenode
    • Hadoop Flags: Reviewed


      Recently we encountered an issue where, after a failover, the NameNode complains about corrupted files/missing blocks. The blocks did recover after full block reports, so the blocks were not actually missing. After further investigation, we believe this is what happened:

      First of all, on the Standby NameNode (SbN), it is possible to receive block reports before the corresponding edits have been tailed. In that case the SbN postpones processing the DN block report, handled by the guarding logic below:

            if (shouldPostponeBlocksFromFuture &&
                namesystem.isGenStampInFuture(iblk)) {
              queueReportedBlock(storageInfo, iblk, reportedState,
                  QUEUE_REASON_FUTURE_GENSTAMP);
              continue;
            }

      Basically, if a reported block has a future generation stamp, the DN report gets queued for later processing.

      However, in FSNamesystem#storeAllocatedBlock, we have the following code:

            // allocate new block, record block locations in INode.
            newBlock = createNewBlock();
            INodesInPath inodesInPath = INodesInPath.fromINode(pendingFile);
            saveAllocatedBlock(src, inodesInPath, newBlock, targets);
            persistNewBlock(src, pendingFile);
            offset = pendingFile.computeFileSize();

      The line

            newBlock = createNewBlock();

      would log an edit entry OP_SET_GENSTAMP_V2 to bump the generation stamp on the Standby, while the following line

            persistNewBlock(src, pendingFile);

      would log another edit entry OP_ADD_BLOCK to actually add the block on the Standby.
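      The fact that one allocation writes two separate edit entries can be sketched with a toy model (all class and method names below are hypothetical, not the actual NameNode code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: one block allocation writes TWO separate edit log entries,
// so a tailing batch boundary on the Standby can fall between them.
class EditLogSketch {
    enum Op { OP_SET_GENSTAMP_V2, OP_ADD_BLOCK }

    static List<Op> editLog = new ArrayList<>();
    static long genStamp = 100;

    // Mirrors createNewBlock(): bumps the generation stamp and logs it.
    static long createNewBlock() {
        genStamp++;
        editLog.add(Op.OP_SET_GENSTAMP_V2);
        return genStamp;
    }

    // Mirrors persistNewBlock(): logs the block itself in a second entry.
    static void persistNewBlock() {
        editLog.add(Op.OP_ADD_BLOCK);
    }

    public static void main(String[] args) {
        createNewBlock();
        persistNewBlock();
        // Nothing forces these two entries into the same tailing batch;
        // the Standby may apply the first and pause before the second.
        System.out.println(editLog);
    }
}
```

      Nothing in this sequence is atomic from the Standby's point of view, which is what makes the race below possible.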

      The race condition is as follows: imagine the Standby has just processed OP_SET_GENSTAMP_V2, but not yet OP_ADD_BLOCK (which can happen if the two entries land in different edit log segments). Now a block report with the new generation stamp comes in.

      Since the genstamp bump has already been processed, the reported block is no longer considered a future block, so the guarding logic passes. But the block has not actually been added to the block map yet, because the second edit entry has not been tailed. The block therefore gets added to the invalidate list, and we saw messages like:

      BLOCK* addBlock: block XXX on node XXX size XXX does not belong to any file

      Even worse, since this incremental block report (IBR) is effectively lost, the NameNode has no information about the block until the next full block report. So after a failover, the NN marks it as corrupt.

      This issue does not happen if both edit entries get tailed together, since then no IBR processing can happen in between. But in our case, we set the edit tailing interval very low (to allow Standby reads), so under high workload there is a much higher chance that the two entries are tailed separately, triggering the issue.


        1. HDFS-14941.001.patch
          14 kB
          Konstantin Shvachko
        2. HDFS-14941.002.patch
          17 kB
          Chen Liang
        3. HDFS-14941.003.patch
          18 kB
          Chen Liang
        4. HDFS-14941.004.patch
          18 kB
          Chen Liang
        5. HDFS-14941.005.patch
          18 kB
          Chen Liang
        6. HDFS-14941.006.patch
          19 kB
          Chen Liang




              Assignee: vagarychen (Chen Liang)
              Reporter: vagarychen (Chen Liang)