Description
Suppose we have two files in two different locations (possibly two different clusters) and the two files have the same size. How can we tell whether their contents are identical?
Currently, the only way is to read both files and compare their contents byte by byte. This is a very expensive operation when the files are huge.
We would therefore like to extend the FileSystem API to support returning file-checksums/file-digests.
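To make the proposal concrete, below is a minimal sketch of how a client could compare two files by checksum once such an extension exists. The getFileChecksum(Path) method and the FileChecksum return type are assumptions about the eventual shape of the API, not a committed signature; FileChecksum#equals is assumed to compare the algorithm name and the digest bytes, and a null return is assumed to mean the underlying file system does not support checksums.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Compares two (possibly remote) files by checksum instead of by content.
 * Assumes a FileSystem#getFileChecksum(Path) method returning a
 * FileChecksum whose equals() compares algorithm name and digest bytes.
 */
public class ChecksumCompare {

  public static boolean sameContent(String uriA, String uriB, Configuration conf)
      throws IOException {
    Path a = new Path(uriA);
    Path b = new Path(uriB);
    FileSystem fsA = a.getFileSystem(conf);
    FileSystem fsB = b.getFileSystem(conf);

    // Quick reject: files of different lengths cannot have the same content.
    if (fsA.getFileStatus(a).getLen() != fsB.getFileStatus(b).getLen()) {
      return false;
    }

    // Ask each file system for a checksum instead of streaming the data.
    FileChecksum ca = fsA.getFileChecksum(a);
    FileChecksum cb = fsB.getFileChecksum(b);
    if (ca == null || cb == null) {
      // Checksums not supported here; the caller would have to fall back
      // to reading and comparing the file contents.
      throw new IOException("checksum not supported for " + uriA + " or " + uriB);
    }
    return ca.equals(cb);
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    System.out.println(sameContent(args[0], args[1], conf));
  }
}
{code}
With such an API, comparing two large files across clusters costs two metadata-sized RPCs instead of two full reads, provided both file systems compute their checksums with the same algorithm.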
Attachments
Issue Links
- blocks HADOOP-3981: Need a distributed file checksum algorithm for HDFS (Closed)