Details
- Type: Bug
- Status: Resolved
- Priority: P2
- Resolution: Fixed
- Affects Version: 2.12.0
Description
After upgrading from Beam 2.8.0 to 2.12.0 we see a huge number of tasks per stage in our pipelines. Where we used to see a few thousand tasks per stage at most, it is now in the millions. This makes the pipeline unable to complete successfully (the driver and network are overloaded).
It looks like after each (Co)GroupByKey operation the number of tasks per stage at least doubles, sometimes more.
I did notice a fix to GroupByKey (BEAM-5392) that may or may not be related.
I cannot post the full pipeline, but we have created a small test to showcase the effect:
https://github.com/pbackx/beam-groupbykey-test
https://github.com/pbackx/beam-groupbykey-test/blob/master/src/test/java/NumTaskTest.java contains two tests:
- One shows how we would usually join PCollections together; if you run it, you'll see the number of tasks gradually increase.
- The other applies a GroupIntoBatches operation after each join, and the number of tasks no longer increases. (The Reshuffle operation has a similar effect, but it's deprecated...)
We've now sprinkled GroupIntoBatches throughout our pipeline, which seems to avoid the issue, but at a performance cost (admittedly, the slowdown is much worse in the toy example than in our "real" pipeline).
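For reference, the pattern in question looks roughly like the following. This is a minimal sketch, not code from the linked repository: it assumes Beam's Java SDK with the Direct runner on the classpath, and the class name, element values, and batch size of 100 are all illustrative.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;

public class JoinSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();

    // Two small keyed inputs standing in for the real PCollections.
    PCollection<KV<String, Integer>> left =
        p.apply("Left", Create.of(KV.of("a", 1), KV.of("b", 2)));
    PCollection<KV<String, String>> right =
        p.apply("Right", Create.of(KV.of("a", "x"), KV.of("b", "y")));

    TupleTag<Integer> leftTag = new TupleTag<>();
    TupleTag<String> rightTag = new TupleTag<>();

    // The join: on the Spark runner, each such CoGroupByKey is where the
    // reported per-stage task-count growth shows up.
    PCollection<KV<String, CoGbkResult>> joined =
        KeyedPCollectionTuple.of(leftTag, left)
            .and(rightTag, right)
            .apply(CoGroupByKey.create());

    // The workaround described above: batching the keyed results after
    // each join stops the growth (at some performance cost).
    joined.apply(GroupIntoBatches.ofSize(100));

    p.run().waitUntilFinish();
  }
}
```

Without the final GroupIntoBatches step (or a deprecated Reshuffle in its place), repeating the join stage is what produces the increasing task counts described above.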
My questions:
- Is this a bug or is this expected behavior?
- Is GroupIntoBatches the best workaround, or are there better options?