Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Fix Version: 0.14.0
Description
NNBench runs with a small block size (say, 20 bytes) but uses the default value of 512 for io.bytes.per.checksum. Since HADOOP-1134, the block size must be a multiple of bytes.per.checksum. The fix is to set bytes.per.checksum equal to the block size.
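A minimal sketch of the constraint and the fix (plain Java, not the actual DFS code; the class and method names here are illustrative):

```java
public class ChecksumBlockSize {
    // Since HADOOP-1134, the block size must be a whole multiple of
    // io.bytes.per.checksum, because each checksum covers a fixed-size chunk.
    static boolean isValidBlockSize(long blockSize, int bytesPerChecksum) {
        return blockSize % bytesPerChecksum == 0;
    }

    public static void main(String[] args) {
        long smallBlockSize = 20;   // NNBench's small block size
        int defaultChecksum = 512;  // default io.bytes.per.checksum

        // 20 is not a multiple of 512, so the write fails validation
        System.out.println(isValidBlockSize(smallBlockSize, defaultChecksum)); // false

        // The fix: set bytes.per.checksum equal to the block size
        int fixedChecksum = (int) smallBlockSize;
        System.out.println(isValidBlockSize(smallBlockSize, fixedChecksum));   // true
    }
}
```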
I think the following changes to NNBench would help in general (at least the first one):
- NNBench does not log these exceptions; I think it should.
- It calls create() in an infinite loop until create succeeds. Maybe we should have an upper limit, say 10000 attempts.
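The two suggestions above could be combined into a bounded retry loop that also surfaces failures. This is only a sketch under the assumptions stated in the comments, not NNBench's actual code:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class BoundedCreate {
    // Upper limit suggested in the issue
    static final int MAX_CREATE_ATTEMPTS = 10000;

    // Retries op until it succeeds or the attempt limit is hit,
    // returning the number of attempts used.
    static int createWithRetry(Callable<Boolean> op, int maxAttempts) throws IOException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (op.call()) {
                    return attempt;
                }
            } catch (Exception e) {
                // NNBench currently swallows these; log them so failures are visible
                System.err.println("create attempt " + attempt + " failed: " + e);
            }
        }
        throw new IOException("create did not succeed after " + maxAttempts + " attempts");
    }
}
```

Bounding the loop turns a silent hang into a diagnosable failure, and logging each exception makes transient errors (e.g. lease conflicts) visible in the benchmark output.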