HBASE-7643

HFileArchiver.resolveAndArchive() race condition may lead to snapshot data loss


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: hbase-6055, 0.95.2
    • Fix Version/s: 0.94.5, 0.95.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      • The master has an HFile cleaner thread (responsible for cleaning the /hbase/.archive dir)
        • /hbase/.archive/table/region/family/hfile
        • if a table/region/family directory is empty, the cleaner removes it
      • The master can also archive files (from another thread, e.g. DeleteTableHandler)
      • A RegionServer can archive files (from another server/process, e.g. after a compaction)

      The simplified file archiving code looks like this:

      HFileArchiver.resolveAndArchive(...) {
        // ensure that the archive dir exists
        fs.mkdir(archiveDir);
      
        // move the file to the archive
        success = fs.rename(originalPath/fileName, archiveDir/fileName);
      
        // if the rename failed, delete the file without archiving
        if (!success) fs.delete(originalPath/fileName);
      }
      

      Since there's no synchronization between HFileArchiver.resolveAndArchive() and the cleaner run (different process, thread, ...), you can end up in a situation where you are moving a file into a directory that doesn't exist.

      fs.mkdir(archiveDir);
      
      // HFileCleaner chore starts at this point
      // and the archiveDirectory that we just ensured to be present gets removed.
      
      // The rename at this point will fail since the parent directory is missing.
      success = fs.rename(originalPath/fileName, archiveDir/fileName)
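The failing rename can be reproduced in isolation. A minimal, self-contained demonstration on a local filesystem (java.nio.file stands in for the HDFS FileSystem API here; the paths and class name are illustrative, not from the patch):

```java
import java.io.IOException;
import java.nio.file.*;

public class RenameRace {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("race-demo");
        Path src = Files.createFile(tmp.resolve("hfile1"));
        Path archiveDir = tmp.resolve("archive");

        Files.createDirectories(archiveDir);   // fs.mkdir(archiveDir)
        Files.delete(archiveDir);              // HFileCleaner chore removes the empty dir

        try {
            // the rename now fails: the parent directory is gone
            Files.move(src, archiveDir.resolve("hfile1"));
            System.out.println("archived");
        } catch (NoSuchFileException e) {
            System.out.println("rename failed: parent missing");
        }
    }
}
```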
      

      The dangerous part of deleting the file without archiving is that if a snapshot, or a cloned table, relies on that file being present, you are losing data.

      Possible solutions

      • Create a ZooKeeper lock to notify the master ("Hey, I'm archiving something, wait a bit")
      • Add an RS -> Master call to let the master remove files and avoid this kind of situation
      • Avoid removing empty directories from the archive if the table exists or is not disabled
      • Add a retry around the fs.rename

      The last option, the easiest, looks like:

      for (int i = 0; i < retries; ++i) {
        // ensure archive directory to be present
        fs.mkdir(archiveDir);
      
        // ----> possible race <-----
      
        // try to archive file
        success = fs.rename(originalPath/fileName, archiveDir/fileName);
        if (success) break;
      }
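
The retry loop above can be sketched as runnable code. A minimal, self-contained analogy using java.nio.file on a local filesystem (the actual fix operates on Hadoop's FileSystem against HDFS; the class and method names here are illustrative, not the patch's):

```java
import java.io.IOException;
import java.nio.file.*;

public class ArchiveRetry {
    // Try to archive a file by renaming it into archiveDir, re-creating
    // the archive directory on each attempt in case a concurrent cleaner
    // removed it between the mkdir and the rename.
    static boolean archiveWithRetry(Path file, Path archiveDir, int retries) throws IOException {
        for (int i = 0; i < retries; ++i) {
            Files.createDirectories(archiveDir);  // ensure the archive dir is present
            try {
                Files.move(file, archiveDir.resolve(file.getFileName()));
                return true;  // archived successfully
            } catch (NoSuchFileException e) {
                // the parent directory vanished (cleaner race) -- retry
            }
        }
        return false;  // exhausted retries; the caller decides what to do
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("archive-test");
        Path src = Files.createFile(tmp.resolve("hfile1"));
        Path archive = tmp.resolve("archive").resolve("table").resolve("region").resolve("family");
        boolean ok = archiveWithRetry(src, archive, 3);
        System.out.println(ok && Files.exists(archive.resolve("hfile1")));  // true
    }
}
```

Note that on retry exhaustion the file is left in place rather than deleted, which is what keeps the snapshot-referenced data safe.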
      

        Attachments

        1. HBASE-7653-p4-v7.patch
          10 kB
          Matteo Bertozzi
        2. HBASE-7653-p4-v6.patch
          9 kB
          Matteo Bertozzi
        3. HBASE-7653-p4-v5.patch
          8 kB
          Matteo Bertozzi
        4. HBASE-7653-p4-v4.patch
          8 kB
          Matteo Bertozzi
        5. HBASE-7653-p4-v3.patch
          7 kB
          Matteo Bertozzi
        6. HBASE-7653-p4-v2.patch
          7 kB
          Matteo Bertozzi
        7. HBASE-7653-p4-v1.patch
          4 kB
          Matteo Bertozzi
        8. HBASE-7653-p4-v0.patch
          4 kB
          Matteo Bertozzi


            People

            • Assignee: Matteo Bertozzi (mbertozzi)
            • Reporter: Matteo Bertozzi (mbertozzi)
            • Votes: 0
            • Watchers: 6
