
HDFS-7943: Append cannot handle the last block with length greater than the preferred block size

Details

• Type: Bug
• Status: Closed
• Priority: Blocker
• Resolution: Fixed
• Affects Version/s: 2.7.0
• Fix Version/s: 2.7.0
• Component/s: None
• Labels: None
• Target Version/s:
• Hadoop Flags: Reviewed

Description

In HDFS-3689, the restriction that all source files must have the same preferred block size as the target file was removed from concat. As a result, a file can end up containing a block larger than its preferred block size.

If such a block happens to be the last block of a file, and data is later appended without the CreateFlag.NEW_BLOCK flag (i.e., appending to the last block), the current client code keeps writing to that last block and never allocates a new one.
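A minimal sketch of the scenario, assuming a single-node MiniDFSCluster test setup (hadoop-hdfs test jar); the block sizes (1 MB target, 2 MB source), paths, and class name are illustrative assumptions, not details from this report:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class AppendAfterConcatRepro {
  public static void main(String[] args) throws Exception {
    final long smallBlock = 1024 * 1024;    // target's preferred block size
    final long largeBlock = 2 * smallBlock; // source's preferred block size

    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      DistributedFileSystem dfs = cluster.getFileSystem();

      // Target file: exactly one full block of the small preferred size.
      Path target = new Path("/target");
      FSDataOutputStream out =
          dfs.create(target, true, 4096, (short) 1, smallBlock);
      out.write(new byte[(int) smallBlock]);
      out.close();

      // Source file: a single 1.5 MB block with a 2 MB preferred size.
      // Mixing preferred block sizes in concat is allowed since HDFS-3689.
      Path src = new Path("/src");
      out = dfs.create(src, true, 4096, (short) 1, largeBlock);
      out.write(new byte[(int) (smallBlock + smallBlock / 2)]);
      out.close();

      // After concat, the target's last block (1.5 MB) exceeds its
      // preferred block size (1 MB).
      dfs.concat(target, new Path[] { src });

      // Appending without CreateFlag.NEW_BLOCK reuses that oversized last
      // block; before the fix the client kept writing into it and never
      // allocated a new block.
      out = dfs.append(target);
      out.write(new byte[(int) smallBlock]);
      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}

The attached patches address the append path so that the client no longer grows the oversized last block indefinitely.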

Attachments

1. HDFS-7943.000.patch (5 kB, Jing Zhao)
2. HDFS-7943.001.patch (6 kB, Jing Zhao)


People

• Assignee: Jing Zhao (jingzhao)
• Reporter: Jing Zhao (jingzhao)
• Votes: 0
• Watchers: 6
