With the addition of a major new feature to filesystems, the filesystem specification in hadoop-common/site is now out of sync.
This means that:
- there is no strict specification of what the operation should do
- tests cannot be derived from such a specification
- anyone else implementing the API will have to infer the intended behaviour from the HDFS source
- there is no way to decide whether the HDFS implementation does what is intended
- without matching tests against the raw local FS, differences between the HDFS implementation and the POSIX semantics won't be caught until it is potentially too late to fix them
The operation should be relatively easy to define: after a truncate, the file's bytes [0...len-1] must equal the original bytes, length(file) == len, etc.
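That postcondition is simple enough to check mechanically against the raw local FS via POSIX truncate. A minimal sketch (the helper name is illustrative, not part of any Hadoop API; this uses Python's `os.truncate`, not the HDFS implementation):

```python
import os
import tempfile

def check_truncate_postcondition(original: bytes, new_len: int) -> None:
    """Check the truncate contract on the local (POSIX) filesystem:
    after truncating to new_len, the file's bytes [0...new_len-1] must
    equal the original bytes, and length(file) must equal new_len."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(original)
        os.truncate(path, new_len)  # POSIX truncate on the local FS
        with open(path, "rb") as f:
            data = f.read()
        assert len(data) == new_len, "length(file) must equal len"
        assert data == original[:new_len], "bytes [0...len-1] must be unchanged"
    finally:
        os.remove(path)

check_truncate_postcondition(b"0123456789", 5)
print("truncate postcondition holds")
```

A filesystem-neutral statement like this is exactly what a contract test would assert through the `FileSystem` API instead of `os.truncate`.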
The truncate tests already written could then be pulled up into contract tests that any filesystem implementation can run.