Hadoop HDFS · HDFS-7473

Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0, 2.5.2
    • Fix Version/s: 2.7.0
    • Component/s: documentation
    • Hadoop Flags: Reviewed

      Description

      When dfs.namenode.fs-limits.max-directory-items is set to 0 in hdfs-site.xml, the NameNode fails with "java.lang.IllegalArgumentException: Cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater than 6400000". However, the documentation states that 0 is a valid setting which disables the check.
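      For reference, this is the property as it would be set in hdfs-site.xml to trigger the error (reconstructed from the description above):

```xml
<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <!-- Documented as "0 disables the check", but rejected at startup -->
  <value>0</value>
</property>
```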

      Looking into the code in hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java shows that the culprit is this precondition:

      Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
          "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
          + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);

      The check requires maxDirItems to be strictly greater than 0, so a value of 0 fails it. Note that the error message is also misleading: it says "less than 0", even though 0 itself is rejected.
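      To make the failure mode concrete, here is a minimal standalone sketch of the check, with Guava's Preconditions.checkArgument replaced by a plain if/throw so it compiles without dependencies. The class and method names are illustrative, not taken from FSDirectory:

```java
public class MaxDirItemsCheck {
    // Upper bound from the error message in the report (6400000).
    static final int MAX_DIR_ITEMS = 6400000;

    static void checkMaxDirItems(int maxDirItems) {
        // The strict '>' is what rejects 0: the documented
        // "0 disables the limit" value never passes this check.
        if (!(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS)) {
            throw new IllegalArgumentException(
                "Cannot set dfs.namenode.fs-limits.max-directory-items"
                + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
        }
    }

    public static void main(String[] args) {
        checkMaxDirItems(100000);  // an in-range value passes
        try {
            checkMaxDirItems(0);   // 0 is rejected, contradicting the docs
        } catch (IllegalArgumentException e) {
            System.out.println("0 rejected: " + e.getMessage());
        }
    }
}
```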

        Attachments

        1. HDFS-7473-001.patch
          2 kB
          Akira Ajisaka


              People

              • Assignee: Akira Ajisaka (aajisaka)
              • Reporter: Jason Keller (keller-jason)
              • Votes: 0
              • Watchers: 6
