Details
Description
In Hadoop 2, when a file inside an encryption zone is opened for write, a snapshot is taken, and the file is then appended, the file size read back from the snapshot is larger than the size reported by the listing. This happens even when immutable snapshots (HDFS-11402) are enabled.
Note: the HDFS-8905 refactor in Hadoop 3.0 and later silently (probably incidentally) fixed this bug. Hadoop 2.x releases still suffer from it.
Thanks to sodonnell for locating the root cause in the codebase.
Repro:
1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml and start the HDFS cluster.
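For reference, the corresponding property in hdfs-site.xml (standard Hadoop XML configuration syntax) would look like:
<property>
  <name>dfs.namenode.snapshot.capture.openfiles</name>
  <value>true</value>
</property>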
2. Create an empty directory /dataenc, make it an encryption zone, and allow snapshots on it
hadoop key create reprokey
sudo -u hdfs hdfs dfs -mkdir /dataenc
sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
3. Use a client that keeps a file open for write under /dataenc. For example, I'm using the Flume HDFS sink to tail a local file.
4. Append the file several times using the client, keeping the file open throughout.
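If a Flume setup isn't handy, steps 3 and 4 can be approximated with any long-lived HDFS writer. The sketch below is one illustrative option; the class name, file name, iteration count, and sleep interval are made up for the example, and it should be run by a user with write access to /dataenc against a cluster whose fs.defaultFS points at HDFS.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileAppender {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/dataenc/openfile.txt");   // example path only
    FSDataOutputStream out = fs.create(file);        // file stays open for write
    for (int i = 0; i < 10; i++) {
      out.writeBytes("append batch " + i + "\n");
      out.hflush();            // make the new bytes visible without closing the file
      Thread.sleep(10000);     // leave time to take the snapshot between writes
    }
    out.close();
  }
}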
5. Create a snapshot
sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
6. Append the file one or more times, but don't let the file size exceed the block size. Wait several seconds for the appends to be flushed to the DataNodes.
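One way to keep an eye on the current size relative to the block size (the path is the example from the sketch above; %b should print the length in bytes and %o the block size):
sudo -u hdfs hdfs dfs -stat "%b %o" /dataenc/openfile.txt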
7. Do an -ls on the file inside the snapshot, then read the file back with -get; the size of the data actually read is larger than the size reported by -ls.
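With the example path from the sketch above, the step 7 check would look something like this; the listed size and the size of the copied-out file should disagree:
sudo -u hdfs hdfs dfs -ls /dataenc/.snapshot/snap1/openfile.txt
sudo -u hdfs hdfs dfs -get /dataenc/.snapshot/snap1/openfile.txt /tmp/openfile.snap
ls -l /tmp/openfile.snap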
The patch and an updated unit test will be uploaded later.
Attachments
Issue Links
- is caused by: HDFS-11402 HDFS Snapshots should capture point-in-time copies of OPEN files (Resolved)
- is related to: HDFS-8905 Refactor DFSInputStream#ReaderStrategy (Resolved)