Hadoop Common / HADOOP-8502

Quota accounting should be calculated based on actual size rather than block size

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      When calculating quotas, the block size is used rather than the actual size of the file. This limits the granularity of quota enforcement to increments of the block size, which is wasteful and limits its usefulness (i.e., it's possible to violate the quota in a way that's not at all intuitive).

      [esammer@xxx ~]$ hadoop fs -count -q /user/esammer/quota-test
              none             inf         1048576         1048576            1            2                  0 hdfs://xxx/user/esammer/quota-test
      [esammer@xxx ~]$ du /etc/passwd
      4       /etc/passwd
      [esammer@xxx ~]$ hadoop fs -put /etc/passwd /user/esammer/quota-test/
      12/06/09 13:56:16 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/esammer/quota-test is exceeded: quota=1048576 diskspace consumed=384.0m
      ...
      

      Obviously the file in question would only occupy 12KB, not 384MB, and should easily fit within the 1MB quota.
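
      The 384.0m figure is consistent with quota being charged for a full block per replica at allocation time: with the default 128 MB block size and a replication factor of 3, the client reserves 3 × 128 MB = 384 MB against the space quota before a single byte is written. A minimal workaround sketch (not from this ticket; the path is illustrative, and dfs.block.size is the 1.x-era property name, later deprecated in favor of dfs.blocksize) is to write the file with a block size and replication small enough that the reservation fits under the quota:

      [esammer@xxx ~]$ hadoop fs -D dfs.block.size=524288 -D dfs.replication=1 \
          -put /etc/passwd /user/esammer/quota-test/

      With a 512 KB block and a single replica, the worst-case reservation is 512 KB, which fits within the 1 MB quota. Once the file is closed, the consumed figure drops back to actual bytes × replication, which is presumably why the issue was resolved as Not A Problem: the over-charge is a transient, worst-case reservation made while the block is under construction.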

People

    • Assignee: Unassigned
    • Reporter: Eric Sammer (esammer)
    • Votes: 0
    • Watchers: 9
