Details
Description
Using the FILE_LOADS method in WriteToBigQuery initially appears to work: load jobs are submitted and (at least sometimes) succeed, and the data lands in the correct tables.
But the temporary tables that were created are never deleted, and often the data is never even copied from the temp tables to the destination.
It appears that after the load jobs, Beam should wait for them to finish, then copy the data from the temp tables and delete them; however, when used with a streaming pipeline, it does not complete these steps.
In case it's not clear, this is for the Python SDK.
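For illustration, here is a minimal sketch of the kind of streaming pipeline that exhibits this behavior. The project, topic, table name, and schema are hypothetical placeholders, not taken from the original report:

```python
# Minimal sketch of a streaming pipeline using FILE_LOADS; the project,
# topic, table, and schema below are hypothetical placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        # Unbounded source keeps the pipeline in streaming mode.
        | "Read" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/my-topic")
        # Convert each message into a BigQuery row dict.
        | "ToRow" >> beam.Map(lambda msg: {"payload": msg.decode("utf-8")})
        # Write via load jobs rather than streaming inserts. Beam stages the
        # rows into temporary tables, runs load jobs against them, and is then
        # expected to copy the results into the destination table and delete
        # the temp tables -- the steps that reportedly never happen here.
        | "Write" >> beam.io.WriteToBigQuery(
            table="my-project:my_dataset.my_table",
            schema="payload:STRING",
            method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
            triggering_frequency=60,  # seconds; required for FILE_LOADS in streaming
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

With FILE_LOADS on an unbounded input, triggering_frequency controls how often a batch of staged files is loaded; each firing goes through the temp-table path described above, so the leftover temp tables accumulate with every trigger.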