Hadoop HDFS / HDFS-74

dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Won't Fix

    Description

      The changes for https://issues.apache.org/jira/browse/HADOOP-1463 have caused a regression. Earlier:

      • we could set dfs.du.reserved to 1G and be sure that 1 GB would not be used by DFS (see the sketch below).
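
      For illustration, a minimal sketch of reading that reserve from the Hadoop configuration. Configuration.getLong is the real API; the class around it and the default value are assumptions, not the actual datanode source:

      import org.apache.hadoop.conf.Configuration;

      public class ReservedSpaceConfig {
          public static void main(String[] args) {
              Configuration conf = new Configuration();
              // dfs.du.reserved: bytes to always leave free for non-DFS use.
              // Setting it to 1073741824 (1 GB) should guarantee that much
              // space is never consumed by DFS. The default of 0 here is an
              // illustrative assumption.
              long reserved = conf.getLong("dfs.du.reserved", 0L);
              System.out.println("reserved = " + reserved + " bytes");
          }
      }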

      Now this is no longer true. To quote Pete Wyckoff's example:

      <example>
      Let's look at an example: a 100 GB disk, with /usr using 45 GB and DFS using 50 GB.

      df -kh shows:

      Capacity = 100 GB
      Available = 1 GB (remember, ~4 GB is chopped out for metadata and such)
      Used = 95 GB

      remaining = 100 GB - 50 GB - 1 GB = 49 GB

      min(remaining, available) = 1 GB

      98% of which is apparently usable for DFS.

      So we're at the limit, but are still free to use 98% of the remaining 1 GB.
      </example>
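
      A minimal sketch of that arithmetic, assuming the post-HADOOP-1463 accounting behaves as the example describes (the variable names are illustrative, not the actual FSDataset code):

      public class DuReservedRegression {
          public static void main(String[] args) {
              final long GB = 1024L * 1024 * 1024;

              long capacity = 100 * GB;  // 'Capacity': first field of df
              long dfsUsed  = 50 * GB;   // blocks already stored by DFS
              long reserved = 1 * GB;    // dfs.du.reserved
              long dfAvail  = 1 * GB;    // 'Available' from df; non-DFS files ate the rest
              float duPct   = 0.98f;     // fraction of remaining space DFS may use

              long remaining = capacity - dfsUsed - reserved;  // 49 GB
              long available = Math.min(remaining, dfAvail);   // 1 GB
              long usable    = (long) (available * duPct);     // ~0.98 GB

              // The datanode still reports ~1 GB as usable, so the 1 GB
              // reserve gets consumed instead of honored.
              System.out.println("usable for DFS = " + usable / (1024 * 1024) + " MB");
          }
      }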

      This is broken. Based on the discussion on HADOOP-1463, it seems the notion of 'capacity' as the first field of 'df' is problematic. For example, here's what our df output looks like:

      Filesystem Size Used Avail Use% Mounted on
      /dev/sda3 130G 123G 49M 100% /

      As you can see, 'Size' is a misnomer: that much space is not actually usable. The real usable space is 123G + 49M ≈ 123G. (Not entirely sure what the discrepancy is due to, but we have heard it may be space reserved for filesystem metadata.) Because of this discrepancy, we end up in a situation where the filesystem is out of space.
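
      A hedged sketch of the alternative this suggests: derive usable capacity from the Used and Avail columns instead of the raw Size field (again, the names are illustrative, not Hadoop's actual DF class):

      public class UsableCapacity {
          public static void main(String[] args) {
              final long GB = 1024L * 1024 * 1024;
              final long MB = 1024L * 1024;

              long size  = 130 * GB;  // 'Size' from df: overstates usable space
              long used  = 123 * GB;  // 'Used' from df
              long avail = 49 * MB;   // 'Avail' from df

              long usableCapacity = used + avail;         // ~123 GB truly usable
              long unaccounted    = size - used - avail;  // ~7 GB, possibly fs metadata reserve

              System.out.println("usable capacity ~= " + usableCapacity / GB + " GB");
              System.out.println("unaccounted     ~= " + unaccounted / MB + " MB");
          }
      }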

    People

      Assignee: Unassigned
      Reporter: Joydeep Sen Sarma (jsensarma)
      Votes: 1
      Watchers: 9
