The HDFS smoke test case TestHDFSQuota:testInputValues asserts that setSpaceQuota must fail when the quota is set to 0.
This does not match Hadoop's definition of the space quota.
Hadoop's documentation (2.7.2 / 2.8.4) describes setSpaceQuota as follows:
Set the space quota to be N bytes for each directory. This is a hard limit on total size of all the files under the directory tree. The space quota takes replication also into account, i.e. one GB of data with replication of 3 consumes 3GB of quota. N can also be specified with a binary prefix for convenience, for e.g. 50g for 50 gigabytes and 2t for 2 terabytes etc. Best effort for each directory, with faults reported if N is , the directory does not exist or it is a file, or the directory would immediately exceed the new quota.
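As a minimal sketch (not Bigtop or Hadoop code; the helper names `parse_quota` and `consumed_space` are hypothetical), the accounting described above can be illustrated like this: the quota value accepts binary prefixes such as 50g or 2t, and consumed space is charged as file size times replication factor.

```python
# Sketch of the space-quota accounting described in the Hadoop doc above.
# Helper names are hypothetical, for illustration only.

BINARY_PREFIXES = {"k": 2**10, "m": 2**20, "g": 2**30, "t": 2**40}

def parse_quota(n: str) -> int:
    """Parse a quota value such as '50g' or '2t' into bytes (binary prefixes)."""
    n = n.strip().lower()
    if n and n[-1] in BINARY_PREFIXES:
        return int(n[:-1]) * BINARY_PREFIXES[n[-1]]
    return int(n)

def consumed_space(file_size_bytes: int, replication: int) -> int:
    """Space quota charged for a file: size multiplied by replication factor."""
    return file_size_bytes * replication

# One GB of data with replication of 3 consumes 3 GB of quota.
print(consumed_space(parse_quota("1g"), 3) == parse_quota("3g"))  # True
```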
This assertion produces a false failure and should be removed from the test case.