Description
Managed to create this on a parallel test run:
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
We should be able to handle this. Note that it is a 400 "bad things happened" error, not the 503 that S3 uses for throttling.
We need a retry handler for DDB throttle operations.
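As a sketch of the kind of handler needed, the snippet below wraps a DynamoDB call in jittered exponential backoff on ProvisionedThroughputExceededException (which is a RuntimeException in the v1 AWS SDK, so it can be caught directly). The class name, method name, and tuning constants are illustrative assumptions, not the eventual S3Guard implementation (see HADOOP-13761 for that work).

// Minimal sketch, assuming the v1 AWS SDK; names and limits are illustrative.
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;

import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

public final class DdbThrottleRetry {

  private static final int MAX_ATTEMPTS = 7;       // illustrative retry limit
  private static final long BASE_DELAY_MS = 100;   // first backoff interval
  private static final long MAX_DELAY_MS = 15_000; // cap on any single sleep

  private DdbThrottleRetry() {
  }

  /**
   * Invoke {@code operation}, retrying with jittered exponential backoff
   * whenever DynamoDB reports throttling. Any other exception propagates.
   */
  public static <T> T retryOnThrottle(Callable<T> operation)
      throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return operation.call();
      } catch (ProvisionedThroughputExceededException e) {
        if (attempt >= MAX_ATTEMPTS) {
          // Out of retries: surface throttling as an IOException so that
          // callers see a filesystem-level failure rather than a raw SDK one.
          throw new IOException("DynamoDB throttling persisted after "
              + attempt + " attempts", e);
        }
        // Exponential backoff: base * 2^(attempt-1), capped, plus jitter so
        // parallel test runs do not all retry in lockstep.
        long delay = Math.min(MAX_DELAY_MS,
            BASE_DELAY_MS << Math.min(attempt - 1, 20));
        long jitter = ThreadLocalRandom.current().nextLong(delay / 2 + 1);
        Thread.sleep(delay / 2 + jitter);
      }
    }
  }
}

A caller would wrap each DDB operation, e.g. retryOnThrottle(() -> ddb.deleteItem(request)); the jitter matters here because the failure above came from a parallel test run, where synchronized retries would just re-trigger the throttle.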
Attachments
Issue Links
- is related to
  - HADOOP-15583 Stabilize S3A Assumed Role support (Resolved)
  - HADOOP-15604 Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard (Resolved)
  - HADOOP-17180 S3Guard: Include 500 DynamoDB system errors in exponential backoff retries (Open)
- relates to
  - HADOOP-13761 S3Guard: implement retries for DDB failures and throttling; translate exceptions (Resolved)
  - HADOOP-15349 S3Guard DDB retryBackoff to be more informative on limits exceeded (Resolved)
- supercedes
  - HADOOP-15022 s3guard IT tests increase R/W capacity of the test table by 1 (Resolved)