Description
The test reports that the file takes an extra 4k on disk:
Testcase: testDU took 5.74 sec
  FAILED
expected:<32768> but was:<36864>
junit.framework.AssertionFailedError: expected:<32768> but was:<36864>
	at org.apache.hadoop.fs.TestDU.testDU(TestDU.java:79)
This is because du reports 36k for the file: the 32k of data plus an extra 4k block, since the filesystem it lives on uses extended attributes.
common-branch-0.20 $ dd if=/dev/zero of=data bs=4096 count=8
8+0 records in
8+0 records out
32768 bytes (33 kB) copied, 9.6e-05 seconds, 341 MB/s
common-branch-0.20 $ du data
36      data
common-branch-0.20 $ du --apparent-size data
32      data
We should modify the test to allow for some extra on-disk slack. The on-disk usage could also be smaller than expected if the file data is all zeros or compression is enabled. The test currently handles the former by writing random data; we're punting on the latter.
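A minimal sketch of what the slack-tolerant check could look like. The class and method names here are illustrative, not the actual patch; it assumes a fixed 4 KiB slack for one extra filesystem block of overhead (e.g. extended attributes):

```java
// Hypothetical sketch of a tolerance-based size check for TestDU.
// SLACK_BYTES and withinSlack are illustrative names, not from the real patch.
public class DuSlackCheck {
    // Allow up to one extra 4 KiB filesystem block of on-disk overhead,
    // e.g. for extended attributes.
    static final long SLACK_BYTES = 4 * 1024;

    // True if the measured on-disk size is at least the bytes written
    // and no more than one slack block larger.
    static boolean withinSlack(long expectedBytes, long actualBytes) {
        return actualBytes >= expectedBytes
            && actualBytes <= expectedBytes + SLACK_BYTES;
    }

    public static void main(String[] args) {
        long written = 32768;                                  // 8 blocks of 4096 bytes
        System.out.println(withinSlack(written, 32768));       // exact match
        System.out.println(withinSlack(written, 36864));       // +4k xattr overhead
        System.out.println(withinSlack(written, 40960));       // too much overhead
    }
}
```

With this check, the observed 36864-byte result passes while anything more than one block over the 32768 bytes written still fails.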
Attachments
Issue Links
- is related to HADOOP-7473: TestDU is too sensitive to underlying filesystem (Closed)