In Hadoop 2, when a file in an encryption zone is opened for write, a snapshot is taken, and the file is then appended to, the file size read from the snapshot is larger than the size reported by a listing. This happens even when immutable snapshots (HDFS-11402) are enabled.
Note: the HDFS-8905 refactor in Hadoop 3.0 and later fixed this bug silently (probably incidentally). Hadoop 2.x releases still suffer from it.
Thanks to Stephen O'Donnell for locating the root cause in the codebase.
Steps to reproduce:
1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml and start the HDFS cluster.
2. Create an empty directory /dataenc, make it an encryption zone, and allow snapshots on it.
3. Use a client that keeps a file open for write under /dataenc. For example, I'm using a Flume HDFS sink to tail a local file.
4. Append to the file several times using the client, keeping the file open.
5. Create a snapshot.
6. Append to the file one or more times, but do not let the file size exceed the block size. Wait several seconds for the appends to be flushed to the DataNodes.
7. Run -ls on the file inside the snapshot, then read it with -get; the actual size of the file read is larger than the size reported by -ls.
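The steps above can be sketched with standard HDFS CLI commands. This is an illustrative transcript, not the exact commands used: the key name (testkey), snapshot name (s1), and file name (/dataenc/FlumeData) are placeholders, a KMS must already be configured for the key commands to work, and the open writer (step 3) is represented only by comments since it runs in a separate client.

```shell
# Prerequisite (hdfs-site.xml, before starting the NameNode):
#   dfs.namenode.snapshot.capture.openfiles = true

# Step 2: encryption zone with snapshots allowed (key/path names are examples)
hadoop key create testkey
hdfs dfs -mkdir /dataenc
hdfs crypto -createZone -keyName testkey -path /dataenc
hdfs dfsadmin -allowSnapshot /dataenc

# Steps 3-4: a separate client (e.g. a Flume HDFS sink) keeps
# /dataenc/FlumeData open for write and appends to it several times.

# Step 5: snapshot while the file is still open
hdfs dfs -createSnapshot /dataenc s1

# Step 6: the client appends again, staying under the block size;
# wait a few seconds for the appends to reach the DataNodes.

# Step 7: compare the listed size with the size actually read
hdfs dfs -ls /dataenc/.snapshot/s1/FlumeData
hdfs dfs -get /dataenc/.snapshot/s1/FlumeData /tmp/snap.copy
ls -l /tmp/snap.copy   # with the bug present, larger than the -ls size above
```

Because the commands require a running HDFS cluster with a KMS, this transcript is meant to be read alongside the steps rather than run verbatim.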
The patch and an updated unit test will be uploaded later.