Description
When ~50 files are being committed, each in its own thread from the commit pool, a single process doing task commit can probably overload the DDB repo on its own. We should be backing off more aggressively, especially since a failed write could leave the metadata store inconsistent with the FS (renames, etc.).
It would be nice to have some tests proving that exceeding the I/O thresholds is the reason for unprocessed items in the DynamoDB metadata store.
Issue Links
- Is contained by:
  - HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename (Resolved)
- is depended upon by:
  - HADOOP-14936 S3Guard: remove "experimental" from documentation (Resolved)
- is related to:
  - HADOOP-15349 S3Guard DDB retryBackoff to be more informative on limits exceeded (Resolved)
- relates to:
  - HADOOP-15426 Make S3guard client resilient to DDB throttle events and network failures (Resolved)
  - HADOOP-15576 S3A Multipart Uploader to work with S3Guard and encryption (Resolved)
  - HADOOP-16118 S3Guard to support on-demand DDB tables (Resolved)