HDFS-11402: HDFS Snapshots should capture point-in-time copies of OPEN files


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.6.0
    • Fix Version/s: 2.9.0, 3.0.0-alpha4
    • Component/s: hdfs
    • Labels: None
    • Hadoop Flags: Incompatible change, Reviewed
    • Release Note:
      When the config param "dfs.namenode.snapshot.capture.openfiles" is enabled, HDFS snapshots will additionally capture point-in-time copies of open files that have valid leases. Even as the current versions of these open files grow or shrink in size, the snapshot always retains their immutable versions, just as it does for all other closed files. Note: the file length captured for an open file in a snapshot is the one recorded in the NameNode at the time of the snapshot, and it may be shorter than what the client has written by then. To capture the latest length, the client can call hflush/hsync with the flag SyncFlag.UPDATE_LENGTH on the open file handles.
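      A minimal sketch of how this is used, assuming a client that writes through a DistributedFileSystem: the property name and the SyncFlag.UPDATE_LENGTH call come from the release note above, while the helper class and its name are hypothetical.

      <!-- hdfs-site.xml on the NameNode: enable capturing open files in snapshots -->
      <property>
        <name>dfs.namenode.snapshot.capture.openfiles</name>
        <value>true</value>
      </property>

      import java.io.IOException;
      import java.util.EnumSet;

      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
      import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

      /** Hypothetical client-side helper: push the latest length of an open
       *  HDFS file to the NameNode so a subsequent snapshot captures it. */
      public class SnapshotLengthSync {
        public static void syncLength(FSDataOutputStream out) throws IOException {
          // Streams created by a DistributedFileSystem are HdfsDataOutputStream.
          ((HdfsDataOutputStream) out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
        }
      }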

    Description

      Problem:

      1. When there are files being written and HDFS Snapshots are taken in parallel, the snapshots do capture all these files; however, the files being written do not have their point-in-time length captured. That is, these open files are not frozen in HDFS Snapshots: an open file in a snapshot grows or shrinks in length along with the original file, even after the snapshot time.

      2. At the time of file close, or any other metadata modification operation on these files, HDFS reconciles the file length and records the modification in the last taken snapshot. All the previously taken snapshots continue to have those open files with no modification recorded, so they all end up resolving to the final modification record in the last snapshot. Thus, after the file close, the file lengths in all those snapshots end up the same.

      Assume File1 is opened for write and a total of 1MB is written to it. While the writes are happening, snapshots are taken in parallel.

      |---Time---T1-----------T2-------------T3----------------T4------>
      |-----------------------Snap1----------Snap2-------------Snap3--->
      |---File1.open---write---------write-----------close------------->
      

      Then at time,
      T2:
      Snap1.File1.length = 0

      T3:
      Snap1.File1.length = 0
      Snap2.File1.length = 0

      <File1 write completed and closed>

      T4:
      Snap1.File1.length = 1MB
      Snap2.File1.length = 1MB
      Snap3.File1.length = 1MB
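
      The walkthrough above can be reproduced with the public FileSystem snapshot API. A hedged sketch, assuming a running cluster with fs.defaultFS pointing at HDFS and privileges to manage snapshots; the directory, file, and snapshot names are illustrative:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;

      public class OpenFileSnapshotDemo {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Assumes fs.defaultFS points at an HDFS cluster.
          DistributedFileSystem fs = (DistributedFileSystem) FileSystem.get(conf);
          Path dir = new Path("/data");
          fs.allowSnapshot(dir);                    // requires admin privileges

          FSDataOutputStream out = fs.create(new Path(dir, "File1"));
          out.write(new byte[512 * 1024]);          // writes between T1 and T2
          fs.createSnapshot(dir, "Snap1");          // T2

          out.write(new byte[512 * 1024]);          // writes between T2 and T3
          fs.createSnapshot(dir, "Snap2");          // T3

          printLen(fs, dir, "Snap1");               // 0 while File1 is open
          printLen(fs, dir, "Snap2");               // 0 while File1 is open

          out.close();                              // close reconciles the length
          fs.createSnapshot(dir, "Snap3");          // T4

          printLen(fs, dir, "Snap1");               // now 1MB, per the problem above
          printLen(fs, dir, "Snap2");               // now 1MB
          printLen(fs, dir, "Snap3");               // 1MB
        }

        static void printLen(FileSystem fs, Path dir, String snap) throws Exception {
          Path p = new Path(dir, ".snapshot/" + snap + "/File1");
          System.out.println(snap + ".File1.length = " + fs.getFileStatus(p).getLen());
        }
      }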

      Proposal

      1. At the time of taking a snapshot, SnapshotManager#createSnapshot can optionally request DirectorySnapshottableFeature#addSnapshot to freeze open files (a toy model of this flow follows the list).

      2. DirectorySnapshottableFeature#addSnapshot can consult the LeaseManager and get a list of INodesInPath for all open files under the snapshot dir.

      3. After the snapshot creation, diff creation, and modification time update, DirectorySnapshottableFeature#addSnapshot can invoke INodeFile#recordModification for each of the open files. This way, the snapshot just taken will have a FileDiff with the file size captured for each open file.

      4. The above model follows the current Snapshot and Diff protocols and doesn't introduce any new on-disk formats, so no new FSImage loader/saver changes should be needed for snapshots.

      5. One of the design goals of HDFS Snapshots was the ability to take any number of snapshots in O(1) time. Although the LeaseManager holds all open files with leases in an in-memory map, an iteration is still needed to prune the list down to the open files of interest and then run recordModification on each of them. So, with the above proposal, snapshot creation is not strictly O(1) anymore; the increase is only marginal, though, as the new order is O(open_files_under_snap_dir). To avoid changing the snapshot behavior for open files and the time complexity by default, this improvement can be gated behind a new config "dfs.namenode.snapshot.freeze.openfiles", which defaults to false.
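
      To make the proposed flow concrete, below is a self-contained toy model of steps 1-3 and the config gate from step 5. None of this is actual NameNode code: OpenFile, openFilesUnder, and addSnapshot are simplified stand-ins for INodeFile, the LeaseManager lookup, and DirectorySnapshottableFeature#addSnapshot, respectively.

      import java.util.ArrayList;
      import java.util.List;

      class FreezeOpenFilesSketch {
        /** Stand-in for an open file's INodeFile, with its current length. */
        static class OpenFile {
          final String path;
          long length;
          Long lengthFrozenInLatestSnapshot;   // what the FileDiff would record
          OpenFile(String path, long length) { this.path = path; this.length = length; }
          /** Models INodeFile#recordModification: capture the size in the new snapshot's diff. */
          void recordModification() { lengthFrozenInLatestSnapshot = length; }
        }

        /** Models the LeaseManager lookup: open files under the snapshot root. */
        static List<OpenFile> openFilesUnder(String snapRoot, List<OpenFile> allOpenFiles) {
          List<OpenFile> pruned = new ArrayList<>();
          for (OpenFile f : allOpenFiles) {
            if (f.path.startsWith(snapRoot)) {   // prune to the snapshot dir
              pruned.add(f);
            }
          }
          return pruned;
        }

        /** Models DirectorySnapshottableFeature#addSnapshot with freezing enabled. */
        static void addSnapshot(String snapRoot, List<OpenFile> allOpenFiles,
            boolean freezeOpenFiles) {
          // ...snapshot creation, diff creation, and mtime update happen here...
          if (freezeOpenFiles) {   // gated by dfs.namenode.snapshot.freeze.openfiles
            // Extra cost is O(open_files_under_snap_dir), as noted in step 5.
            for (OpenFile f : openFilesUnder(snapRoot, allOpenFiles)) {
              f.recordModification();
            }
          }
        }
      }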

      Attachments

        1. HDFS-11402-branch-2.01.patch
          67 kB
          Manoj Govindassamy
        2. HDFS-11402.08.patch
          68 kB
          Manoj Govindassamy
        3. HDFS-11402.07.patch
          68 kB
          Manoj Govindassamy
        4. HDFS-11402.06.patch
          68 kB
          Manoj Govindassamy
        5. HDFS-11402.05.patch
          67 kB
          Manoj Govindassamy
        6. HDFS-11402.04.patch
          67 kB
          Manoj Govindassamy
        7. HDFS-11402.03.patch
          62 kB
          Manoj Govindassamy
        8. HDFS-11402.02.patch
          52 kB
          Manoj Govindassamy
        9. HDFS-11402.01.patch
          44 kB
          Manoj Govindassamy


People

    Assignee: Manoj Govindassamy
    Reporter: Manoj Govindassamy
    Votes: 0
    Watchers: 22
