Hadoop HDFS
HDFS-1026

Quota checks fail for small files and quotas

    Details

    • Type: Bug
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.22.0
    • Fix Version/s: None
    • Component/s: documentation, namenode
    • Labels:
    • Release Note:
      Quota checks fail for small files and small quotas; this condition is now logged.

      Description

      If a directory has a quota less than blockSize * numReplicas, then you can't add a file to it, even if the file size is less than the quota. This is because FSDirectory#addBlock updates the count assuming at least one block is written in full. We don't know how much of the block will be written when addBlock is called, and supporting such small quotas is not important, so perhaps we should document this limitation and log an error message instead of making small (< blockSize * numReplicas) quotas work.

      // In FSDirectory#addBlock:
      // check quota limits and updated space consumed
      updateCount(inodes, inodes.length - 1, 0,
                  fileINode.getPreferredBlockSize() * fileINode.getReplication(), true);
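The quota charge made by that call can be sketched as follows. This is a simplified illustration, not the actual FSDirectory code; the class and method names are hypothetical, and the constants assume the 0.20.x defaults (64 MB preferred block size, replication factor 3):

```java
// Simplified sketch of the pessimistic accounting in FSDirectory#addBlock
// (illustrative only; constants assume 0.20.x defaults).
public class QuotaChargeSketch {
    // Space charged when a block is allocated: one full block per replica,
    // regardless of how many bytes the writer will actually store.
    static long chargedSpace(long preferredBlockSize, short replication) {
        return preferredBlockSize * replication;
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024; // default dfs.block.size
        short replication = 3;              // default dfs.replication
        // A 64 kB file is charged as if it consumed a full block per replica.
        System.out.println(chargedSpace(blockSize, replication)); // 201326592 (192 MB)
    }
}
```

Because the charge is computed before any data is written, it is the only safe upper bound the namenode has at allocation time.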
      

      You can reproduce with the following commands:

      $ dd if=/dev/zero of=temp bs=1000 count=64
      $ hadoop fs -mkdir /user/eli/dir
      $ hdfs dfsadmin -setSpaceQuota 191M /user/eli/dir
      $ hadoop fs -put temp /user/eli/dir  # Causes DSQuotaExceededException
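The arithmetic behind the repro can be sketched as follows (names and structure are illustrative, not from the Hadoop source). With the 0.20.x defaults of a 64 MB block size and replication 3, allocating the first block charges 192 MB against the 191 MB quota, so even the 64 kB file above fails:

```java
// Hedged sketch of why a 64 kB put fails against a 191 MB quota
// (illustrative names; defaults assumed from 0.20.x).
public class QuotaMathDemo {
    static boolean exceedsQuota(long blockSize, short replication, long quota) {
        // The namenode charges a full block per replica at allocation time.
        return blockSize * replication > quota;
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // default dfs.block.size
        short replication = 3;                // default dfs.replication
        long quota = 191L * 1024 * 1024;      // from: -setSpaceQuota 191M
        System.out.println(exceedsQuota(blockSize, replication, quota)); // true
    }
}
```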
      
      Attachments

      1. HDFS-1026.pacth
         1 kB
         Hızır Sefa İrken

      Issue Links

      Activity

      No work has yet been logged on this issue.

      People

      • Assignee: Unassigned
      • Reporter: Eli Collins
      • Votes: 0
      • Watchers: 8

      Dates

      • Created:
      • Updated:

      Development