Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
Reported by rkins
Based on the memory computations, Drill decides that there is not sufficient memory and falls back to the single-partition case. The single-partition case, however, does not respect the imposed memory constraint and completes the query using ~130 MB of memory, well above the 117127360-byte (~112 MB) limit set below.
alter session set `planner.width.max_per_node` = 1;
alter session set `planner.memory.max_query_memory_per_node` = 117127360;
select count(*) from (select max(nulls_col), max(length(nulls_col)), max(`filename`) from dfs.`/drill/testdata/hash-agg/data1` group by no_nulls_col) d;
Based on analysis by ben-zvi, this is by design. When the Hash Aggregate operator finds that there is not enough memory for at least two partitions, it falls back to the pre-1.11 behavior (using a 10 GB limit).
The solution is to provide a configuration option that controls whether this fallback is allowed or the query fails instead.
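A minimal sketch of how such an option might be used per session, assuming it is exposed like other Drill session options; the option name drill.exec.hashagg.fallback.enabled used here is an assumption for illustration, not confirmed by this report:

-- Hypothetical option name (assumption): disable the fallback so the
-- Hash Aggregate fails the query instead of using the unbounded
-- pre-1.11 single-partition path.
alter session set `drill.exec.hashagg.fallback.enabled` = false;
-- Re-running the reproduction query above should then fail with a
-- memory/configuration error rather than exceed the session limit.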
Attachments
Issue Links