Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
While reviewing HDFS-13671, I found that "dfsadmin -listOpenFiles -blockingDecommission" can drop some files.
https://github.com/apache/hadoop/pull/3065#discussion_r647396463
Suppose the DataNodes have the following open files and we want to list all of them:

DN1: [1001, 1002, 1003, ... , 2000]
DN2: [1, 2, 3, ... , 1000]

At first getFilesBlockingDecom(0, "/") is called, and it returns [1001, 1002, ... , 2000] because the batch reached the max size (1000). Next, getFilesBlockingDecom(2000, "/") is called because the last inode ID of the previous batch is 2000. Since every open file on DN2 has an inode ID smaller than 2000, the cursor filter skips all of them, and the open files on DN2 are missed entirely.
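
The flawed cursor scheme is easy to reproduce outside the NameNode. Below is a minimal, self-contained sketch assuming the per-DataNode iteration order and batch size described above; the class OpenFilePaginationBug and its helpers are hypothetical and are not the actual FSNamesystem code.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Minimal simulation of the batched-listing bug. The real logic lives in
 * the NameNode; this sketch only reproduces the pagination scheme
 * described above, with hypothetical names.
 */
public class OpenFilePaginationBug {

  static final int MAX_BATCH_SIZE = 1000;

  /**
   * Returns up to MAX_BATCH_SIZE open-file inode IDs greater than prevId.
   * Files are visited DataNode by DataNode, NOT in global inode-ID order,
   * which is the root of the bug.
   */
  static List<Long> getFilesBlockingDecom(long prevId, List<long[]> dataNodes) {
    List<Long> batch = new ArrayList<>();
    for (long[] openFilesOnDn : dataNodes) {
      for (long inodeId : openFilesOnDn) {
        if (inodeId <= prevId) {
          continue; // cursor filter: treat the ID as "already returned"
        }
        if (batch.size() >= MAX_BATCH_SIZE) {
          return batch; // batch full; the caller pages with the last ID
        }
        batch.add(inodeId);
      }
    }
    return batch;
  }

  public static void main(String[] args) {
    // DN1 holds inode IDs 1001..2000, DN2 holds 1..1000.
    List<long[]> dataNodes = Arrays.asList(range(1001, 2000), range(1, 1000));

    long prevId = 0;
    int totalListed = 0;
    List<Long> batch;
    do {
      batch = getFilesBlockingDecom(prevId, dataNodes);
      totalListed += batch.size();
      if (!batch.isEmpty()) {
        prevId = batch.get(batch.size() - 1); // cursor = last inode ID
      }
    } while (!batch.isEmpty());

    // Prints "listed 1000 of 2000": the first batch ends at inode ID 2000,
    // so the second call filters out every file on DN2 (IDs 1..1000).
    System.out.println("listed " + totalListed + " of 2000");
  }

  static long[] range(long from, long to) {
    long[] ids = new long[(int) (to - from + 1)];
    for (int i = 0; i < ids.length; i++) {
      ids[i] = from + i;
    }
    return ids;
  }
}
{code}

Running the sketch prints "listed 1000 of 2000", which illustrates that paging by the last returned inode ID is only safe when results come back in globally sorted inode-ID order, not grouped per DataNode.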
Issue Links
- relates to
  - HDFS-13671 Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet (Resolved)
  - HDFS-11847 Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning (Resolved)