Hadoop HDFS / HDFS-5463

NameNode should limit the number of blocks per file

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      Currently there is no limit on the number of blocks a user can write to a single file.

      The block size can also be set to the minimum possible value.

      A user can write any number of blocks continuously, which can degrade the NameNode's performance and service as the number of blocks in the file grows. Because each time a new block is allocated, all blocks of the file are persisted again, this can cause serious performance degradation.

      So the proposal is to limit the maximum number of blocks a user can write to a file.

      Maybe 1024 blocks (if the block size is 128 MB, then the maximum file size would be 128 GB).
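
      As an illustration (not part of this issue; the 1 MB block size, 10 GB write, and /tmp/many-blocks path below are only assumptions), a client that requests a very small block size forces the NameNode to allocate, track, and re-persist a very large block list for a single file:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FSDataOutputStream;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class ManyBlocksExample {
            public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);

              long blockSize = 1L * 1024 * 1024;   // 1 MB, far below the 128 MB default
              short replication = (short) 3;
              int bufferSize = 4096;

              // Writing 10 GB at a 1 MB block size produces ~10,240 blocks for one file;
              // every new block allocation persists the file's entire block list again.
              byte[] chunk = new byte[1024 * 1024];
              FSDataOutputStream out = fs.create(
                  new Path("/tmp/many-blocks"), true, bufferSize, replication, blockSize);
              try {
                for (int i = 0; i < 10 * 1024; i++) {
                  out.write(chunk);
                }
              } finally {
                out.close();
              }
            }
          }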

        Issue Links

          Activity

          Andrew Wang added a comment -

          As Uma said above, I think this is handled as of 2.1.0 by HDFS-4305. Please re-open if you feel this is incorrect. Thanks Vinay.

          Uma Maheswara Rao G added a comment -

          Hi Vinay,

          I think this is already addressed. Please see this parameter:
          public static final String DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY =
              "dfs.namenode.fs-limits.max-blocks-per-file";
          public static final long DFS_NAMENODE_MAX_BLOCKS_PER_FILE_DEFAULT = 1024 * 1024;

          if (pendingFile.getBlocks().length >= maxBlocksPerFile) {
            throw new IOException("File has reached the limit on maximum number of"
                + " blocks (" + DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY
                + "): " + pendingFile.getBlocks().length + " >= "
                + maxBlocksPerFile);
          }

          Addressed as part of HDFS-4305.
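
          For completeness, a minimal sketch of tightening this limit to the 1024 blocks suggested in the description (assuming a programmatic Configuration for illustration; a real cluster would set the key in the NameNode's hdfs-site.xml):

              import org.apache.hadoop.conf.Configuration;
              import org.apache.hadoop.hdfs.DFSConfigKeys;

              public class MaxBlocksPerFileConfig {
                public static void main(String[] args) {
                  Configuration conf = new Configuration();
                  // Default is 1024 * 1024 blocks; 1024 caps a file at 128 GB
                  // with the default 128 MB block size.
                  conf.setLong(DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY, 1024);
                  System.out.println("max blocks per file = " + conf.getLong(
                      DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY,
                      DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_DEFAULT));
                }
              }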


            People

            • Assignee:
              Vinayakumar B
            • Reporter:
              Vinayakumar B
            • Votes:
              0
            • Watchers:
              5

              Dates

              • Created:
              • Updated:
              • Resolved:

                Development