Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 3.4.1
Description
I am using the Spark DataFrame API for complex calculations. When I need GROUPING SETS, my only option is to convert each expression to SQL via the analyzed plan and then splice those fragments into one large SQL statement to execute. In some cases this produces an extremely complex SQL statement, and while parsing and executing it, ANTLR4 keeps consuming a large amount of memory, similar to a memory leak. These computations would be much simpler if GROUPING SETS could be expressed directly through the DataFrame API, the way rollup and cube already can.
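A minimal sketch of the contrast described above, assuming a toy `sales` DataFrame (the column names, data, and local SparkSession setup are illustrative, not taken from this report): `rollup` and `cube` are already available on the DataFrame API, while an equivalent GROUPING SETS aggregation has to be assembled as SQL text and pushed back through the parser, which is where the large-SQL memory growth is observed.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object GroupingSetsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("grouping-sets-sketch")
      .getOrCreate()
    import spark.implicits._

    // Toy data, purely illustrative.
    val sales = Seq(
      ("US", "web", 100), ("US", "store", 50),
      ("DE", "web", 80),  ("DE", "store", 30)
    ).toDF("country", "channel", "amount")

    // rollup and cube can be expressed directly on the DataFrame.
    sales.rollup($"country", $"channel").agg(sum($"amount").as("total")).show()
    sales.cube($"country", $"channel").agg(sum($"amount").as("total")).show()

    // GROUPING SETS has no DataFrame-level counterpart here, so the same
    // aggregation must be spliced into a SQL string and re-parsed.
    sales.createOrReplaceTempView("sales")
    spark.sql(
      """SELECT country, channel, SUM(amount) AS total
        |FROM sales
        |GROUP BY GROUPING SETS ((country), (channel), (country, channel))
        |""".stripMargin
    ).show()

    spark.stop()
  }
}
```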
Attachments
Issue Links
- links to