Hadoop HDFS / HDFS-7943

Append cannot handle the last block with length greater than the preferred block size

Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 2.7.0
    • Fix Version/s: 2.7.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      In HDFS-3689, we removed the restriction from concat that all the source files must have the same preferred block size as the target file. As a result, concat can now produce a file containing blocks larger than the file's preferred block size.
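
      A minimal sketch of how such a file can arise (the paths, sizes, and single-replica setup below are illustrative assumptions, not details from this report): the target is written with a small preferred block size, the source with a larger one, and concat moves the source's full block onto the target.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;

      public class ConcatOversizedBlock {
        // Writes a file of the given length with the given preferred block size.
        static void writeFile(DistributedFileSystem dfs, Path p, long blockSize,
            int len) throws Exception {
          try (FSDataOutputStream out =
                   dfs.create(p, true, 4096, (short) 1, blockSize)) {
            out.write(new byte[len]);
          }
        }

        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();  // assumes fs.defaultFS is HDFS
          DistributedFileSystem dfs =
              (DistributedFileSystem) new Path("/").getFileSystem(conf);

          Path target = new Path("/tmp/target");
          Path source = new Path("/tmp/source");
          writeFile(dfs, target, 1L << 20, 1 << 20);  // one full 1 MB block
          writeFile(dfs, source, 2L << 20, 2 << 20);  // one full 2 MB block

          // Since HDFS-3689 this succeeds despite the differing block sizes,
          // leaving /tmp/target (preferred size 1 MB) with a 2 MB last block.
          dfs.concat(target, new Path[] { source });
        }
      }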

      If such a block happens to be the last block of a file, and we later append data to the file without the CreateFlag.NEW_BLOCK flag (i.e., appending to the last block), it looks like the current client code will keep writing to that block and never allocate a new one.
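
      A sketch of the two append paths (again with illustrative paths and sizes, and assuming the /tmp/target file built above): the default append resumes the existing last block, which is the broken case this issue describes, while passing CreateFlag.NEW_BLOCK (the 2.7 variable-length-block API) makes the appended data start a fresh block.

      import java.util.EnumSet;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.CreateFlag;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;

      public class AppendAfterConcat {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();  // assumes fs.defaultFS is HDFS
          DistributedFileSystem dfs =
              (DistributedFileSystem) new Path("/").getFileSystem(conf);
          // Assumes the last block of /tmp/target is already larger than the
          // file's preferred block size (e.g. built via concat as above).
          Path target = new Path("/tmp/target");

          // Default append resumes the existing last block: this is the code
          // path that keeps writing to the oversized block and never
          // allocates a new one.
          try (FSDataOutputStream out = dfs.append(target)) {
            out.write(new byte[1 << 20]);
          }

          // Passing CreateFlag.NEW_BLOCK makes the appended data start a
          // fresh block, sidestepping the broken path.
          try (FSDataOutputStream out = dfs.append(target,
              EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK), 4096, null)) {
            out.write(new byte[1 << 20]);
          }
        }
      }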

      Attachments

        1. HDFS-7943.000.patch (5 kB, Jing Zhao)
        2. HDFS-7943.001.patch (6 kB, Jing Zhao)


            People

              Assignee: Jing Zhao
              Reporter: Jing Zhao
              Votes: 0
              Watchers: 6

              Dates

                Created:
                Updated:
                Resolved: