Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version: 1.5.0
- Fix Version: None
- Component: None
Description
I have executed 5 tests from the Advanced/mondrian test suite in a loop overnight.
My observation is that direct memory steadily grew from 117MB to 1.8GB and stayed at that level through 14875 iterations of the tests.
My question is: why do 5 queries that were able to execute with 117MB of memory require 1.8GB of memory after 5 hours of execution?
Attached:
- Memory used after each test iteration: memComsumption.txt
- Log of the framework run: drill.log.2016-01-12-16
- Tests: test.tar
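For anyone repeating the experiment: per-drillbit heap and direct memory can be sampled between test iterations with a query against the sys.memory system table. This is only a sketch of one way to collect such numbers, not necessarily how memComsumption.txt was produced:

0: jdbc:drill:schema=dfs> -- reports current heap and direct memory usage for each drillbit
0: jdbc:drill:schema=dfs> select * from sys.memory;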
Setup:
Single node, 32-core box.
DRILL_MAX_DIRECT_MEMORY="4G"
DRILL_HEAP="1G"

0: jdbc:drill:schema=dfs> select * from sys.options where status like '%CHANGED%';
+-----------------------------------+----------+---------+----------+----------+-------------+-----------+------------+
|               name                |   kind   |  type   |  status  | num_val  | string_val  | bool_val  | float_val  |
+-----------------------------------+----------+---------+----------+----------+-------------+-----------+------------+
| planner.enable_decimal_data_type  | BOOLEAN  | SYSTEM  | CHANGED  | null     | null        | true      | null       |
+-----------------------------------+----------+---------+----------+----------+-------------+-----------+------------+
1 row selected (1.309 seconds)
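(The CHANGED row above means the option was set explicitly at some point, presumably with a statement along these lines; the statement below is an illustration, not taken from the actual run.)

0: jdbc:drill:schema=dfs> alter system set `planner.enable_decimal_data_type` = true;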
Reproduction:
- tar xvf test.tar into the Functional/test directory
- ./run.sh -s Functional/test -g regression -t 180 -n 5 -i 10000000 -m
This is very similar to the behavior Hakim and I observed a long time ago with window functions. Now that the new allocator is in place, we reran this test and see similar behavior, yet the allocator does not seem to think that we have a memory leak. Hence the speculation that memory is being leaked in the RPC layer.
I'm going to reduce planner.width.max_per_node and see whether it has any effect on memory allocation (speculating again ...).
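For reference, that experiment would be something like the statement below. The default for this option is roughly 70% of the available cores, so lowering it reduces the number of parallel fragments (and hence RPC buffers) per query; the value 8 here is only an example, not a tested setting:

0: jdbc:drill:schema=dfs> alter system set `planner.width.max_per_node` = 8;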