Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Versions: 2.4.0, 2.5.2
Hadoop Flags: Reviewed
Description
Setting dfs.namenode.fs-limits.max-directory-items to 0 in hdfs-site.xml produces the error "java.lang.IllegalArgumentException: Cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater than 6400000". However, the documentation lists 0 as a valid value for dfs.namenode.fs-limits.max-directory-items, which disables the check.
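For reference, the configuration that triggers the exception is an hdfs-site.xml entry like the following, where 0 is the documented way to disable the per-directory limit:

<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>0</value>
</property>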
Looking at the code in hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java, the culprit is the following precondition:
Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
    "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
    + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
The precondition requires maxDirItems to be strictly greater than 0, so a value of 0 fails the check and throws the IllegalArgumentException, even though the documentation describes 0 as a valid value that turns the limit off.
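For comparison, below is a minimal standalone sketch of a relaxed precondition that matches the documented behaviour, where 0 means the check is disabled. The class MaxDirItemsCheck, the validate helper, and the hard-coded key and limit (taken from the error message) are illustrative only; this is not the committed patch, and a real fix would also need to skip the per-directory count check when the value is 0.

import com.google.common.base.Preconditions;

public class MaxDirItemsCheck {
  // Mirrors the upper bound quoted in the error message (6400000).
  static final int MAX_DIR_ITEMS = 64 * 100 * 1000;
  static final String KEY = "dfs.namenode.fs-limits.max-directory-items";

  // Relaxed check: >= 0 instead of > 0, so 0 ("limit disabled" per the docs) is accepted.
  static void validate(int maxDirItems) {
    Preconditions.checkArgument(maxDirItems >= 0 && maxDirItems <= MAX_DIR_ITEMS,
        "Cannot set " + KEY + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
  }

  public static void main(String[] args) {
    validate(0);       // passes with the relaxed precondition
    validate(1048576); // a typical non-zero limit also passes
  }
}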
Attachments
Issue Links
relates to: HDFS-6102 Lower the default maximum items per directory to fix PB fsimage loading (Closed)