Details
Type: Test
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version/s: 1.5.0
Fix Version/s: None
Component/s: None
Environment: 4 nodes cluster, 32 cores each
Description
The following tests are running out of memory:
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q174.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q171.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q168_DRILL-2046.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q162_DRILL-1985.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q165.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q177_DRILL-2046.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q159_DRILL-2046.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/large/q157_DRILL-1985.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/large/q175_DRILL-1985.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q160_DRILL-1985.q
framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q163_DRILL-2046.q
With errors similar to the following:
java.sql.SQLException: SYSTEM ERROR: DrillRuntimeException: Failed to pre-allocate memory for SV. Existing recordCount*4 = 0, incoming batch recordCount*4 = 696
Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.
Those queries operate on wide tables, and the sort's memory limit is too low when using the default value of planner.memory.max_query_memory_per_node.
We should update those tests to set a higher limit (4 GB worked well for me) for planner.memory.max_query_memory_per_node.
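As a sketch of the proposed fix, the tests could raise the option at the session level before running the affected queries (the 4 GB value is the one that worked above; the byte count below is 4 * 1024^3 and is an assumption about how the tests would express it):

```sql
-- Raise the per-node memory budget for this query's buffered operators
-- (sort, etc.) to 4 GB before running the wide-column query.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;
```

Setting it per session keeps the higher limit scoped to these wide-table tests instead of changing the system-wide default.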