Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
Description
When I have a HAR file with 1000 files in it,
% hadoop dfs -lsr har:///user/knoguchi/myhar.har/
opens, reads, and closes the _index/_masterindex files 1000 times.
This makes the client slow and adds load to the namenode as well.
Is there any way to reduce this number?
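The direction the linked duplicate (MAPREDUCE-2459) took is to cache the parsed archive index instead of re-reading _index/_masterindex for every file lookup. A minimal sketch of that idea, using hypothetical class and method names (not the actual HarFileSystem internals): parse the index once per archive path and serve all subsequent lookups from memory.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of index caching: a map keyed by archive path holds
// the parsed index, so listing N files triggers one index read, not N.
public class HarIndexCache {
    // Counts how often the index files are actually opened (for illustration).
    static int indexReads = 0;

    // Cache: archive path -> parsed index (file name -> placeholder entry).
    private static final Map<String, Map<String, String>> CACHE = new HashMap<>();

    // Simulates opening and parsing _index/_masterindex for one archive.
    private static Map<String, String> parseIndex(String harPath) {
        indexReads++;
        Map<String, String> index = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            index.put("file" + i, "entry-for-file" + i);
        }
        return index;
    }

    // Look up one file's metadata, reading the index only on a cache miss.
    static String lookup(String harPath, String fileName) {
        return CACHE.computeIfAbsent(harPath, HarIndexCache::parseIndex)
                    .get(fileName);
    }

    public static void main(String[] args) {
        // A recursive listing touches every file, but the index is parsed once.
        for (int i = 0; i < 1000; i++) {
            lookup("/user/knoguchi/myhar.har", "file" + i);
        }
        System.out.println("index reads: " + indexReads); // prints "index reads: 1"
    }
}
```

The trade-off is staleness: a cached index will not see an archive that was replaced in place, so a real implementation would need to key or invalidate the cache on the archive's modification time.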
Attachments
Issue Links
- duplicates MAPREDUCE-2459 Cache HAR filesystem metadata (Closed)