The underlying problem is underestimation of filter selectivity for queries with complicated predicates, e.g. deeply nested AND/OR predicates. This leads to under-parallelization of the major fragment performing the join.
To fully resolve this problem we need table/column statistics to estimate selectivity correctly. However, in the absence of statistics, or when the existing statistics are insufficient for a correct selectivity estimate, this will serve as a workaround.
For now, the fix is to provide options that control the lower and upper bounds of filter selectivity. The selectivity can be varied between 0 and 1, with the minimum selectivity always less than or equal to the maximum selectivity.
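As an illustration, assuming the bounds are exposed as session options named `planner.filter.min_selectivity_estimate_factor` and `planner.filter.max_selectivity_estimate_factor` (the actual option names may differ), they could be set like this:

```sql
-- Hypothetical option names; substitute the names actually exposed by the planner.
ALTER SESSION SET `planner.filter.min_selectivity_estimate_factor` = 0.25;
ALTER SESSION SET `planner.filter.max_selectivity_estimate_factor` = 0.75;
```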
When using 'explain plan including all attributes for <query>', the estimated ROWCOUNT of the FILTER operator is capped based on these options. The estimated ROWCOUNT of downstream operators is not directly controlled by these options; however, it may change as a result of dependencies between operators. The FILTER operator only operates on the output of its immediate upstream operator (e.g. SCAN, AGG). If two different filters are present in the same plan, they may end up with different estimated ROWCOUNTs, since each depends on its immediate upstream operator's ROWCOUNT.
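The interaction between the bounds and the upstream ROWCOUNT can be sketched as follows. This is a minimal model of the capping behavior, not the planner's actual code; the function names and numbers are illustrative assumptions.

```python
def clamp_selectivity(estimated, min_sel, max_sel):
    """Clamp the planner's estimated filter selectivity to [min_sel, max_sel]."""
    assert 0.0 <= min_sel <= max_sel <= 1.0
    return max(min_sel, min(max_sel, estimated))

def filter_rowcount(upstream_rowcount, estimated, min_sel, max_sel):
    # FILTER output ROWCOUNT = immediate upstream ROWCOUNT x clamped selectivity.
    return upstream_rowcount * clamp_selectivity(estimated, min_sel, max_sel)

# Two filters in the same plan share the same bounds but sit on top of
# operators with different ROWCOUNTs, so their estimates differ.
scan_rows = 1_000_000   # e.g. a SCAN feeding the first filter
agg_rows = 10_000       # e.g. an AGG feeding the second filter

print(filter_rowcount(scan_rows, 0.001, 0.05, 0.5))  # raised to the 0.05 floor -> 50000.0
print(filter_rowcount(agg_rows, 0.9, 0.05, 0.5))     # capped at the 0.5 ceiling -> 5000.0
```

The example shows why the options only bound the FILTER itself: operators downstream of the filter see the clamped ROWCOUNT as their input estimate, so their own estimates shift indirectly.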