Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 2.4.0
- Fix Version/s: None
- Component/s: None
Description
Currently, several code paths in Hadoop use recursive listing to discover files and folders under a given path. For example, FileInputFormat (in both the mapreduce and mapred implementations) does this while calculating splits. However, these callers list level by level: to discover the files under /foo/bar, they first call listStatus on /foo/bar to get its immediate children, then make the same call on each immediate child to discover its children, and so on. This does not scale well for filesystem implementations such as S3, because every listStatus call becomes a web-service call. When a large number of files are considered for input, this makes the getSplits() call slow.
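The cost of the level-by-level approach can be illustrated with a small simulation (this is not Hadoop code; the tree, paths, and function names are made up for illustration). Each call to `list_status()` stands in for one `FileSystem.listStatus()` call, which on S3 would be one web-service round trip:

```python
calls = 0

# Hypothetical directory tree: path -> immediate children.
TREE = {
    "/foo/bar": ["/foo/bar/a", "/foo/bar/b"],
    "/foo/bar/a": ["/foo/bar/a/x.txt"],
    "/foo/bar/b": ["/foo/bar/b/y.txt", "/foo/bar/b/c"],
    "/foo/bar/b/c": ["/foo/bar/b/c/z.txt"],
}

def list_status(path):
    """One simulated listStatus() call on a single directory level."""
    global calls
    calls += 1
    return TREE.get(path, [])

def list_level_by_level(root):
    """Discover all files the level-by-level way: one listStatus()
    call per directory encountered."""
    files, pending = [], [root]
    while pending:
        path = pending.pop()
        for child in list_status(path):
            if child in TREE:      # child is a directory: descend later
                pending.append(child)
            else:                  # child is a file: record it
                files.append(child)
    return files

files = list_level_by_level("/foo/bar")
print(sorted(files))
print(calls)  # 4 -- one call per directory, regardless of batching
```

The call count grows with the number of directories, so a deep or wide input tree on S3 translates directly into many sequential web-service calls.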
This patch adds a new set of recursive listing APIs that give the S3 filesystem implementation an opportunity to optimize. The behavior remains the same for other implementations: a default implementation is provided, so other filesystems do not have to implement anything new. For S3, however, a simple change (shown in the patch) improves listing performance.
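The shape of such an API can be sketched as follows (class and method names here are illustrative, not the actual patch): the base class supplies a default recursive listing built from per-level calls, which a flat-namespace store like S3 can override with a single prefix scan, since S3 object keys are flat and one listing request can return all descendants of a prefix:

```python
class BaseFS:
    """Base class: default recursive listing built on per-level listing."""

    def list_status(self, path):
        raise NotImplementedError

    def list_status_recursively(self, path):
        # Default: preserves the existing level-by-level behavior,
        # so implementations that change nothing keep working as before.
        results = []
        for child, is_dir in self.list_status(path):
            if is_dir:
                results.extend(self.list_status_recursively(child))
            else:
                results.append(child)
        return results

class FlatKeyStoreFS(BaseFS):
    """S3-like store: keys are flat, so one prefix scan lists everything."""

    def __init__(self, keys):
        self.keys = keys   # e.g. {"foo/bar/a/x.txt", ...}
        self.scans = 0     # count simulated web-service calls

    def list_status_recursively(self, path):
        # Override: a single scan returns every descendant of the prefix.
        self.scans += 1
        prefix = path.rstrip("/") + "/"
        return sorted(k for k in self.keys if k.startswith(prefix))

fs = FlatKeyStoreFS({"foo/bar/a/x.txt", "foo/bar/b/y.txt", "foo/baz/q.txt"})
listed = fs.list_status_recursively("foo/bar")
print(listed)
print(fs.scans)  # 1 -- independent of how deep the tree is
```

Callers such as split calculation would then invoke the recursive entry point and automatically benefit wherever the filesystem can satisfy it in fewer round trips.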
Attachments
Issue Links
- is depended upon by
  - MAPREDUCE-5907 Improve getSplits() performance for fs implementations that can utilize performance gains from recursive listing (Patch Available)