Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
Currently we have some scale tests for openstack and s3a. For now we'll just trust HDFS to handle files >5GB and to delete thousands of files in a directory properly.
We should abstract out the scale tests so they can be applied to all FileSystems.
A few things to consider for scale tests:
- Scale tests rely on the tester having good, stable upload bandwidth and may need large disk space; they need to be configurable or optional.
- Scale tests might take a long time to finish; consider making the test timeout configurable if possible.
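One way the opt-in and timeout concerns above could be handled is a small configuration shim shared by all per-FileSystem scale tests. The sketch below is hypothetical: the class name and the `scale.test.*` property keys are illustrative assumptions, not existing Hadoop configuration options.

```java
import java.util.Properties;

// Hypothetical helper an abstract scale-test base class could use.
// Scale tests are opt-in (disabled by default) and their timeout
// is configurable, with a generous default for slow links.
public class ScaleTestConfig {
    // Illustrative keys; not real Hadoop configuration options.
    static final String KEY_ENABLED = "scale.test.enabled";
    static final String KEY_TIMEOUT_MS = "scale.test.timeout.ms";
    static final long DEFAULT_TIMEOUT_MS = 30L * 60 * 1000; // 30 minutes

    // Scale tests run only when explicitly enabled by the tester.
    static boolean scaleTestsEnabled(Properties conf) {
        return Boolean.parseBoolean(conf.getProperty(KEY_ENABLED, "false"));
    }

    // Timeout falls back to the default when unset.
    static long testTimeoutMillis(Properties conf) {
        return Long.parseLong(
            conf.getProperty(KEY_TIMEOUT_MS, Long.toString(DEFAULT_TIMEOUT_MS)));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Default: scale tests are skipped.
        System.out.println("enabled=" + scaleTestsEnabled(conf));

        // A tester with good bandwidth opts in and shortens the timeout.
        conf.setProperty(KEY_ENABLED, "true");
        conf.setProperty(KEY_TIMEOUT_MS, "600000"); // 10 minutes
        System.out.println("enabled=" + scaleTestsEnabled(conf));
        System.out.println("timeoutMs=" + testTimeoutMillis(conf));
    }
}
```

Each FileSystem's concrete scale test (HDFS, s3a, openstack) would then consult these settings before attempting large uploads or bulk deletes.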
Issue Links
- is related to: HADOOP-10714 AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call (Closed)