The AWS Batch Operator attempts to use a boto3 feature that is not available and has not been merged in years;
- see also https://github.com/broadinstitute/cromwell/issues/4303
This is a curious case of premature optimization. In the meantime, the operator falls back to an exponential-backoff routine for the status checks on the batch job. Unfortunately, when the concurrency of Airflow tasks is very high (100's of tasks), this fallback polling hits the AWS Batch API so hard that the AWS API throttle raises an error, which fails the Airflow task, simply because the status is polled too frequently.
Check the output from the retry algorithm: within the first 10 retries, the status of an AWS Batch job is checked about 10 times, at a rate of roughly 1 poll/sec. When an Airflow instance is running 10's or 100's of concurrent batch jobs, this hits the API too frequently and fails the Airflow task (plus it ties up a worker in busy-waiting).
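For context, the fallback loop behaves roughly like the sketch below. This is a paraphrase, not the operator's exact source; the quadratic delay formula is an assumption chosen to match the observed ~1 poll/sec rate over the first ten retries:

```python
# Rough sketch of the fallback polling loop (a paraphrase, not the operator's
# exact source). A quadratic backoff such as 1 + (n * 0.1) ** 2 seconds stays
# near one poll per second for the first ten attempts.
import time

import boto3


def poll_job_status(job_id: str, max_retries: int = 4200) -> str:
    client = boto3.client("batch")
    for retries in range(max_retries):
        status = client.describe_jobs(jobs=[job_id])["jobs"][0]["status"]
        if status in ("SUCCEEDED", "FAILED"):
            return status
        # Delays: ~1.00s, 1.01s, 1.04s, ..., only ~2.0s by the 10th poll.
        time.sleep(1 + (retries * 0.1) ** 2)
    raise RuntimeError(f"AWS Batch job {job_id} did not finish in {max_retries} polls")
```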
Possible solutions are to introduce an initial sleep (say 60 sec?) right after issuing the request, so that the batch job has some time to spin up. The job progresses through a sequence of phases (SUBMITTED, PENDING, RUNNABLE, STARTING) before it gets to the RUNNING state, and polling at a phase-appropriate rate through that sequence might help. Since batch jobs tend to be long-running jobs (rather than near-real-time jobs), it might also help to issue less frequent polls once a job is in the RUNNING state; something on the order of tens of seconds might be reasonable for batch jobs? Maybe the class could expose a parameter for the rate of polling (or a callable)?
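For illustration, a helper exposing those knobs might look like the following; `initial_delay` and `poll_interval` are hypothetical parameter names, not an existing operator API:

```python
# Hypothetical sketch of a throttle-friendly poll loop; initial_delay and
# poll_interval are illustrative names, not the operator's actual API.
import time
from typing import Callable, Union

import boto3


def wait_for_batch_job(
    job_id: str,
    initial_delay: float = 60.0,  # let the job spin up before the first poll
    poll_interval: Union[float, Callable[[int], float]] = 30.0,  # seconds, or a callable of the attempt number
    max_attempts: int = 1000,
) -> str:
    client = boto3.client("batch")
    time.sleep(initial_delay)
    for attempt in range(max_attempts):
        status = client.describe_jobs(jobs=[job_id])["jobs"][0]["status"]
        if status in ("SUCCEEDED", "FAILED"):
            return status
        delay = poll_interval(attempt) if callable(poll_interval) else poll_interval
        time.sleep(delay)
    raise TimeoutError(f"AWS Batch job {job_id} still not finished after {max_attempts} polls")
```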
Another option is to use something like the sensor-poke approach, with rescheduling, e.g.
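A minimal sketch, assuming a custom sensor (the class name and fields here are hypothetical, not an existing Airflow class). With `mode="reschedule"`, the worker slot is released between pokes instead of busy-waiting, and `poke_interval` caps the API call rate:

```python
# Hypothetical sensor sketch: poke AWS Batch at a low rate and release the
# worker slot between pokes via Airflow's reschedule mode.
import boto3
from airflow.exceptions import AirflowException
from airflow.sensors.base_sensor_operator import BaseSensorOperator
from airflow.utils.decorators import apply_defaults


class AwsBatchJobSensor(BaseSensorOperator):
    template_fields = ("job_id",)

    @apply_defaults
    def __init__(self, job_id, **kwargs):
        super().__init__(**kwargs)
        self.job_id = job_id

    def poke(self, context):
        # One describe_jobs call per poke; the scheduler re-queues the task
        # between pokes, so no worker is held while the job runs.
        status = boto3.client("batch").describe_jobs(jobs=[self.job_id])["jobs"][0]["status"]
        if status == "FAILED":
            raise AirflowException("AWS Batch job %s failed" % self.job_id)
        return status == "SUCCEEDED"


# Usage, inside a DAG definition:
wait_for_job = AwsBatchJobSensor(
    task_id="wait_for_batch_job",
    job_id="{{ ti.xcom_pull(task_ids='submit_batch_job') }}",
    mode="reschedule",
    poke_interval=60,  # seconds between pokes
)
```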