Details

Type: Bug
Status: Resolved
Priority: Major
Resolution: Won't Fix
Affects Version/s: Impala 2.9.0
Fix Version/s: None
Labels: ghx-label-4
Description
Q12 of TPC-H failed with a memory limit of 180MB. The listed minimum memory for Q12 is 125MB, so even with the 30MB buffer, a 180MB limit was expected to pass.
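To make the arithmetic explicit, here is a minimal sketch of the expectation (MIN_MEM_FOR_TPCH is the dict referenced in the traceback below; the buffer constant and the should_pass() helper are hypothetical names used for illustration, not the actual logic in test_mem_usage_scaling.py):

# Minimal sketch, assuming the test treats the listed minimum plus the
# 30MB buffer as the threshold above which a query should succeed.
# MIN_MEM_FOR_TPCH appears in the traceback; the rest is illustrative.
MIN_MEM_FOR_TPCH = {'Q12': 125}  # listed per-query minimum, in MB
MEM_LIMIT_BUFFER_MB = 30         # buffer mentioned above (hypothetical name)

def should_pass(query, mem_limit_mb):
    # 125MB minimum + 30MB buffer = 155MB, so 180MB leaves ~25MB headroom.
    return mem_limit_mb >= MIN_MEM_FOR_TPCH[query] + MEM_LIMIT_BUFFER_MB

assert should_pass('Q12', 180)  # holds, yet the query hit the 180MB limit below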
TestTpchMemLimitError.test_low_mem_limit_q12[mem_limit: 180 | exec_option:
{'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0, 'batch_size': 0, 'num_nodes': 0} | table_format: parquet/none]
[gw3] linux2 -- Python 2.6.6 /data/jenkins/workspace/impala-umbrella-build-and-test-isilon/repos/Impala/bin/../infra/python/env/bin/python
query_test/test_mem_usage_scaling.py:169: in test_low_mem_limit_q12
self.low_memory_limit_test(vector, 'tpch-q12', self.MIN_MEM_FOR_TPCH['Q12'])
query_test/test_mem_usage_scaling.py:98: in low_memory_limit_test
self.run_test_case(tpch_query, new_vector)
common/impala_test_suite.py:359: in run_test_case
result = self.__execute_query(target_impalad_client, query, user=user)
common/impala_test_suite.py:567: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:160: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:173: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:339: in __execute_query
self.wait_for_completion(handle)
beeswax/impala_beeswax.py:359: in wait_for_completion
raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
ImpalaBeeswaxException: ImpalaBeeswaxException:
E Query aborted:Memory limit exceeded: Failed to allocate tuple buffer
E HDFS_SCAN_NODE (id=1) could not allocate 73.00 KB without exceeding limit.
E Error occurred on backend impala-boost-static-burst-slave-1b05.vpc.cloudera.com:22001 by fragment 5f4105e5cd83bc82:6a2304b00000004
E Memory left in process limit: 17.58 GB
E Memory left in query limit: 17.55 KB
E Query(5f4105e5cd83bc82:6a2304b00000000): Limit=180.00 MB Total=179.98 MB Peak=179.98 MB
E   Fragment 5f4105e5cd83bc82:6a2304b00000001: Total=11.05 MB Peak=11.12 MB
E     HDFS_SCAN_NODE (id=0): Total=10.96 MB Peak=11.03 MB
E     DataStreamSender (dst_id=5): Total=50.23 KB Peak=66.23 KB
E     CodeGen: Total=1.22 KB Peak=228.00 KB
E   Block Manager: Limit=80.00 MB Total=24.50 MB Peak=24.50 MB
E   Fragment 5f4105e5cd83bc82:6a2304b00000008: Total=26.29 MB Peak=26.93 MB
E     SORT_NODE (id=4): Total=24.00 MB Peak=24.00 MB
E     AGGREGATION_NODE (id=8): Total=2.27 MB Peak=2.27 MB
E       Exprs: Total=4.00 KB Peak=4.00 KB
E     EXCHANGE_NODE (id=7): Total=0 Peak=0
E       DataStreamRecvr: Total=0 Peak=0
E     DataStreamSender (dst_id=9): Total=3.88 KB Peak=3.88 KB
E     CodeGen: Total=6.79 KB Peak=660.50 KB
E   Fragment 5f4105e5cd83bc82:6a2304b00000006: Total=5.53 MB Peak=5.63 MB
E     Runtime Filter Bank: Total=1.00 MB Peak=1.00 MB
E     AGGREGATION_NODE (id=3): Total=1.28 MB Peak=1.28 MB
E       Exprs: Total=12.00 KB Peak=12.00 KB
E     HASH_JOIN_NODE (id=2): Total=1.09 MB Peak=1.16 MB
E       Hash Join Builder (join_node_id=2): Total=1.01 MB Peak=1.01 MB
E     EXCHANGE_NODE (id=5): Total=0 Peak=0
E       DataStreamRecvr: Total=2.12 MB Peak=2.12 MB
E     EXCHANGE_NODE (id=6): Total=0 Peak=0
E       DataStreamRecvr: Total=1.75 KB Peak=100.62 KB
E     DataStreamSender (dst_id=7): Total=7.75 KB Peak=7.75 KB
E     CodeGen: Total=28.32 KB Peak=1.79 MB
E   Fragment 5f4105e5cd83bc82:6a2304b00000004: Total=137.11 MB Peak=137.11 MB
E     HDFS_SCAN_NODE (id=1): Total=129.00 MB Peak=129.00 MB
E       Exprs: Total=20.00 KB Peak=20.00 KB
E     DataStreamSender (dst_id=6): Total=75.50 KB Peak=123.50 KB
E     CodeGen: Total=10.55 KB Peak=477.00 KB