Description
The batch throttling handling may fail too fast.
If a batch update contains 25 writes but the default retry count is nine attempts, the batch may give up after only nine attempts, even if each attempt is actually writing data successfully. Under heavy throttling, where each attempt only gets one item through, just nine of the 25 writes would complete before the retry budget is exhausted.
In contrast, a single write of one piece of data gets the same number of attempts, so 25 individual writes (up to 225 attempts in total) can handle a lot more throttling than a bulk write of the same data.
Proposed: make the retry logic more forgiving of batch writes, such as not counting a batch call where at least one data item was written as a failure; only attempts which make no progress at all would consume the retry budget. A sketch of this follows.
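A minimal sketch of what progress-aware retries could look like, assuming a DynamoDB-style batch API in which each call writes what it can and returns the unprocessed items. The BatchWriter interface, its writeOnce method, and the retryLimit parameter are hypothetical illustrations, not the actual S3Guard/DynamoDBMetadataStore code; backoff between attempts is omitted for brevity.

{code:java}
import java.io.IOException;
import java.util.List;

/**
 * Sketch of progress-aware batch retry logic: an attempt is only charged
 * against the retry budget when it writes nothing, while attempts that
 * make partial progress reset the failure count.
 */
public class BatchRetrySketch {

  /** Hypothetical single batch call: writes what it can, returns leftovers. */
  interface BatchWriter<T> {
    List<T> writeOnce(List<T> items) throws IOException;
  }

  static <T> void writeBatch(BatchWriter<T> writer,
                             List<T> items,
                             int retryLimit) throws IOException {
    List<T> pending = items;
    int failedAttempts = 0;
    while (!pending.isEmpty()) {
      List<T> unprocessed = writer.writeOnce(pending);
      if (unprocessed.size() < pending.size()) {
        // Progress was made: do not count this attempt as a failure.
        failedAttempts = 0;
      } else {
        // Nothing was written: this attempt consumes retry budget.
        failedAttempts++;
        if (failedAttempts >= retryLimit) {
          throw new IOException(unprocessed.size()
              + " items still unwritten after " + retryLimit
              + " consecutive failed attempts");
        }
      }
      pending = unprocessed;
    }
  }
}
{code}

Under this policy a 25-item batch that trickles through one item per attempt would eventually complete instead of failing once a fixed attempt count is hit; only a run of consecutive zero-progress attempts aborts the operation.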
Issue Links
- relates to HADOOP-15833 Intermittent failures of some S3A tests with S3Guard in parallel test runs (Resolved)