Details
- Type: Sub-task
- Status: Resolved
- Priority: Minor
- Resolution: Duplicate
- Affects Version/s: 2.8.4
- Fix Version/s: None
- Flags: Patch
Description
Hadoop distcp does not expose a property to override the storage class when transferring data to Amazon S3, and it does not set any storage class on upload. As a result, all objects moved from the cluster to S3 via distcp are stored in the default storage class, STANDARD. Adding a configuration property to override the default S3 storage class would make it possible to upload objects to other storage classes. I have written up a design for this feature in a design document and uploaded it to this JIRA. Kindly review and share your suggestions.
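For illustration only, a minimal sketch of what such an override could look like from client code, assuming a storage-class option along the lines of fs.s3a.create.storage.class (the property eventually introduced by HADOOP-12020, which this issue duplicates); the bucket and path names are placeholders:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Assumes hadoop-aws is on the classpath and S3 credentials are configured.
    public class StorageClassExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Ask the S3A connector to tag newly created objects with a
            // non-default storage class (property name from HADOOP-12020).
            conf.set("fs.s3a.create.storage.class", "reduced_redundancy");

            FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
            try (FSDataOutputStream out =
                    fs.create(new Path("s3a://example-bucket/data/part-0000"))) {
                out.writeBytes("uploaded with an overridden storage class\n");
            }
        }
    }

Under the same assumption, a distcp run could pass the property as a generic option, e.g. hadoop distcp -Dfs.s3a.create.storage.class=reduced_redundancy hdfs://nn/src s3a://example-bucket/dest. Note the caveat tracked in HADOOP-18339 (linked below): the storage class option is only picked up when writes are buffered to disk.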
Attachments
Issue Links
- depends upon
  - HADOOP-18339 S3A storage class option only picked up when buffering writes to disk (Resolved)
- duplicates
  - HADOOP-12020 Support configuration of different S3 storage classes (Resolved)
- relates to
  - HADOOP-14837 Handle S3A "glacier" data (Open)
  - HADOOP-12020 Support configuration of different S3 storage classes (Resolved)
  - HADOOP-17851 S3A to support user-specified content encoding (Resolved)