Details
- Type: Bug
- Status: Triage Needed
- Priority: P2
- Resolution: Fixed
- Affects Versions: 2.21.0, 2.24.0, 2.25.0, 2.28.0
- Fix Versions: None
Description
When multiple load jobs are needed to write data to a destination table, e.g., when the data is spread over more than 10,000 URIs, WriteToBigQuery in FILE_LOADS mode will write data into temporary tables and then copy the temporary tables into the destination table.
When WriteToBigQuery is used with write_disposition=BigQueryDisposition.WRITE_APPEND and additional_bq_parameters={"schemaUpdateOptions": ["ALLOW_FIELD_ADDITION"]}, the schema update options are not respected by the jobs that copy data from the temporary tables into the destination table. As a result, schema field addition works for small jobs (fewer than 10,000 source URIs), but once a job scales beyond 10,000 source URIs, schema field addition fails with an error such as:
Provided Schema does not match Table project:dataset.table. Cannot add fields (field: field_name)
I've been able to reproduce this issue with Python 3.7 and the DataflowRunner on Beam 2.21.0 and Beam 2.25.0. I could not reproduce it with the DirectRunner. A minimal reproducible example is attached.
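For reference, the triggering configuration looks roughly like the following sketch (this is not the attached reproduction; the destination table and schema are placeholders, and the pipeline assumes the Beam Python SDK with the GCP extras installed):

```python
# Sketch of a WriteToBigQuery configuration affected by the issue.
# Placeholder project/dataset/table and schema; requires apache-beam[gcp].
import apache_beam as beam
from apache_beam.io.gcp.bigquery import BigQueryDisposition, WriteToBigQuery

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create([{"existing_field": 1, "new_field": 2}])
        | "Write" >> WriteToBigQuery(
            "project:dataset.table",  # placeholder destination
            schema="existing_field:INTEGER,new_field:INTEGER",
            method=WriteToBigQuery.Method.FILE_LOADS,
            write_disposition=BigQueryDisposition.WRITE_APPEND,
            # These options are honored by the load jobs into the temporary
            # tables, but not by the copy jobs from the temporary tables into
            # the destination table once >10,000 source URIs force the
            # multi-load (temporary table + copy) path.
            additional_bq_parameters={
                "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION"]
            },
        )
    )
```

With this setup, adding new_field to a destination table that previously lacked it succeeds on the single-load path but fails with the "Cannot add fields" error above on the multi-load path.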