Hadoop HDFS / HDFS-11917

Why does a file smaller than one block size initially take a full block when written through the HDFS NFS gateway?


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 2.8.0
    • Fix Version/s: None
    • Component/s: nfs
    • Labels: None

    Description

      I used the Linux shell to put a file into HDFS through the HDFS NFS gateway. I found that if the file is smaller than one block (128 MB), it still takes a full block (128 MB) of HDFS storage at first; after a few minutes the excess storage is released.
      For example: if I put a 60 MB file into HDFS through the HDFS NFS gateway, it takes one block (128 MB) at first. After a few minutes the excess storage (68 MB) is released, and the file ends up using only 60 MB of HDFS storage.
      Why does this happen?
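
      The behavior can be reproduced from the client side. A minimal sketch, assuming the NFS gateway is mounted at /mnt/hdfs and the file lands at /user/test/data.bin (both paths are hypothetical):

          # Create a 60 MB test file and copy it through the NFS mount
          # (/mnt/hdfs is an assumed mount point for the HDFS NFS gateway)
          dd if=/dev/urandom of=/tmp/data.bin bs=1M count=60
          cp /tmp/data.bin /mnt/hdfs/user/test/data.bin

          # Checked right after the copy, reported usage may still include
          # a full 128 MB block for the file being written
          hdfs dfs -du -h /user/test/data.bin

          # Checked again a few minutes later, after the gateway has
          # flushed and closed the file, usage drops to the actual 60 MB
          hdfs dfs -du -h /user/test/data.bin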


    People

      Assignee: Weiwei Yang (cheersyang)
      Reporter: BINGHUI WANG (fireling)
      Votes: 0
      Watchers: 3
