The tarball uploaded to GCS can be quite large. For example, a user may vendor multiple dependencies in their tarball to produce a more stable deployable artifact.
Before this change, the GCS upload API call executed a multipart upload, which the Google [documentation](https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload) says should only be used when the file is small enough to upload again in its entirety if the connection fails. For large tarballs, we hit 60-second socket timeouts before the multipart upload completes. By passing `total_size`, apitools first checks whether the size exceeds the resumable upload threshold and, if so, executes the more robust resumable upload rather than a multipart one, avoiding the timeout.
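The size check can be sketched as follows. This is an illustrative model, not the actual apitools internals: the threshold constant, its value, and the function name here are assumptions for the sake of the example.

```python
# Illustrative sketch of the strategy selection apitools performs when
# total_size is known. The threshold value below is assumed, not the
# real apitools constant.
RESUMABLE_UPLOAD_THRESHOLD = 5 * 1024 * 1024  # 5 MiB (assumed)

def choose_upload_strategy(total_size):
    """Pick multipart for small/unknown payloads, resumable for large ones.

    When total_size is None (the behavior before this change), the size
    check cannot run, so the upload falls through to multipart even for
    very large tarballs.
    """
    if total_size is None or total_size <= RESUMABLE_UPLOAD_THRESHOLD:
        return "multipart"
    return "resumable"

# Before the change: size unknown, so even a huge tarball goes multipart.
print(choose_upload_strategy(None))            # multipart
# After the change: a large tarball's size trips the threshold.
print(choose_upload_strategy(500 * 1024 * 1024))  # resumable
```

A resumable upload sends the data in chunks and can retry an individual chunk on failure, so a transient connection drop does not force re-sending the whole tarball.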