Hadoop HDFS / HDFS-7473

Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0, 2.5.2
    • Fix Version/s: 2.7.0
    • Component/s: documentation
    • Hadoop Flags: Reviewed

    Description

      When setting dfs.namenode.fs-limits.max-directory-items to 0 in hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater than 6400000" is produced. However, the documentation states that 0 is a valid setting for dfs.namenode.fs-limits.max-directory-items and that it turns the check off.
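
      For reference, the hdfs-site.xml excerpt below reproduces the report; the property name and value are taken from the description above, and the enclosing <configuration> element is assumed:

      <configuration>
        <property>
          <name>dfs.namenode.fs-limits.max-directory-items</name>
          <!-- Documented as turning the check off, but produces the
               IllegalArgumentException quoted above. -->
          <value>0</value>
        </property>
      </configuration>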

      Looking into the code in hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java shows that the culprit is

      Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
          "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
              + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);

      The check requires maxDirItems to be strictly greater than 0. Since 0 is not greater than 0, a configured value of 0 fails the check and produces the error above.
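
      As a standalone illustration, the minimal sketch below (assuming Guava's Preconditions on the classpath; the class name is hypothetical, and the key string and the 6400000 bound are inlined from the quoted error message rather than taken from DFSConfigKeys) shows the same predicate rejecting 0:

      import com.google.common.base.Preconditions;

      public class MaxDirItemsCheckSketch {
          // Upper bound quoted in the error message.
          private static final int MAX_DIR_ITEMS = 6400000;

          public static void main(String[] args) {
              int maxDirItems = 0; // the value set in hdfs-site.xml in this report
              // Same shape as the check quoted from FSDirectory: 0 fails the
              // "maxDirItems > 0" half of the condition, so checkArgument throws
              // java.lang.IllegalArgumentException with the message below.
              Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
                  "Cannot set dfs.namenode.fs-limits.max-directory-items"
                      + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
          }
      }

      Running this throws an IllegalArgumentException with the same message as reported.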

      Attachments

        1. HDFS-7473-001.patch (2 kB, Akira Ajisaka)


          People

            Assignee: Akira Ajisaka (aajisaka)
            Reporter: Jason Keller (keller-jason)
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved:
