IMPALA-4397, IMPALA-3259: reduce codegen time and memory
A handful of fixes to reduce codegen memory usage:
- Delete the IR module when we're done with it (it can be fairly large)
- Track the compiled code size (typically not that large, but it can add
up if there are many fragments).
- Estimate optimisation memory requirements and track it in the memory
tracker. This is very crude but much better than not tracking it.
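As an illustration of the "crude but better than nothing" tracking described above, the logic might look like the following sketch. The names (`MemTrackerSketch`, `EstimateOptimizationMem`, the 4x expansion factor) are hypothetical stand-ins, not Impala's actual API or constants:

```cpp
#include <cstdint>

// Hypothetical memory tracker: consume an estimate before running the LLVM
// optimisation passes, release it once optimisation finishes.
struct MemTrackerSketch {
  int64_t limit;
  int64_t consumed = 0;
  // Returns false if consuming 'bytes' would exceed the limit.
  bool TryConsume(int64_t bytes) {
    if (consumed + bytes > limit) return false;
    consumed += bytes;
    return true;
  }
  void Release(int64_t bytes) { consumed -= bytes; }
};

// Crude estimate: assume optimisation working memory scales with the size of
// the IR module. The multiplier here is purely illustrative.
int64_t EstimateOptimizationMem(int64_t ir_size_bytes) {
  constexpr int64_t kExpansionFactor = 4;
  return ir_size_bytes * kExpansionFactor;
}

// Returns false (so the query can fail cleanly) if the estimate doesn't fit.
bool OptimizeModuleSketch(MemTrackerSketch* tracker, int64_t ir_size_bytes) {
  const int64_t est = EstimateOptimizationMem(ir_size_bytes);
  if (!tracker->TryConsume(est)) return false;
  // ... run optimisation passes here ...
  tracker->Release(est);  // optimisation scratch memory is freed afterwards
  return true;
}
```

The point of releasing the estimate afterwards is that optimisation memory is transient; only the compiled code size stays consumed for the query's lifetime.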
A handful of fixes to improve codegen time/cost, particularly targeted
at compute stats workloads:
- Avoid over-inlining when there are many aggregate functions,
  conjuncts, etc., by adding "NoInline" attributes.
- Don't codegen non-grouping merge aggregations. They will only process
one row per Impala daemon, so codegen is not worth it.
- Make the HLL algorithm more efficient by specialising the hash function
  based on decimal width.
- This doesn't tackle over-inlining of large expr trees, but a similar
approach will be used there in a follow-on patch.
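To illustrate the NoInline fix: the idea is that once the number of codegen'd helper functions (one per aggregate expr, conjunct, etc.) passes a threshold, inlining all of them into the caller blows up compile time, so they are marked no-inline instead. With the real LLVM API this is roughly `fn->addFnAttr(llvm::Attribute::NoInline)`; the sketch below models just the decision logic, with illustrative names and an illustrative threshold:

```cpp
#include <string>
#include <vector>

// Illustrative threshold; the real value is a tuning decision.
constexpr int kInlineExprsThreshold = 10;

// Inline freely when there are few functions; stop inlining past the
// threshold to keep the caller's IR from growing quadratically.
bool ShouldNoInline(int num_codegend_fns) {
  return num_codegend_fns > kInlineExprsThreshold;
}

// Stand-in for an LLVM function handle.
struct FnSketch {
  std::string name;
  bool noinline = false;  // models fn->addFnAttr(llvm::Attribute::NoInline)
};

// Apply the same inlining decision to every codegen'd helper function.
void MarkFunctions(std::vector<FnSketch>* fns) {
  const bool no_inline = ShouldNoInline(static_cast<int>(fns->size()));
  for (auto& fn : *fns) fn.noinline = no_inline;
}
```

The trade-off is per-call overhead for the non-inlined functions versus dramatically less IR for LLVM to optimise; the experiment below suggests the per-call overhead is negligible for this workload.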
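The decimal-width specialisation can be sketched as a template parameterised on the storage width, so the hash loop has a compile-time-known trip count and can be fully unrolled instead of going through a generic variable-length path. The hash below is FNV-1a purely for illustration; it is not Impala's actual hash function, and the dispatch helper is hypothetical:

```cpp
#include <cstdint>

// Hash specialised on the decimal's storage width (4, 8, or 16 bytes).
// ByteWidth is a compile-time constant, so the loop unrolls completely.
template <int ByteWidth>
uint64_t HashDecimal(const void* value) {
  uint64_t h = 14695981039346656037ULL;  // FNV-1a offset basis
  const uint8_t* p = static_cast<const uint8_t*>(value);
  for (int i = 0; i < ByteWidth; ++i) {
    h ^= p[i];
    h *= 1099511628211ULL;  // FNV-1a prime
  }
  return h;
}

// Dispatch on width once per column (e.g. when codegen'ing the plan),
// rather than branching on it for every row.
uint64_t HashDecimalDispatch(const void* value, int byte_width) {
  switch (byte_width) {
    case 4: return HashDecimal<4>(value);
    case 8: return HashDecimal<8>(value);
    default: return HashDecimal<16>(value);
  }
}
```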
Compute stats on functional_parquet.widetable_1000_cols goes from 1min+
of codegen to ~5s of codegen on my machine. Local perf runs of TPC-H
and targeted perf benchmarks showed no regressions and some moderate
improvements.
Also did an experiment to understand the perf consequences of disabling
inlining. I manually set CODEGEN_INLINE_EXPRS_THRESHOLD to 0 and ran:
drop stats tpch_20_parquet.lineitem;
compute stats tpch_20_parquet.lineitem;
There was no difference in time spent in the agg node: 30.7s with
inlining, 30.5s without.
Reviewed-by: Tim Armstrong <email@example.com>
Reviewed-by: Marcel Kornacker <firstname.lastname@example.org>
Tested-by: Internal Jenkins