Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
- set drill.exec.memory.top.max in drill-override.conf to some low value (I used 75000000)
- disable hash aggregate (set planner.enable_hashagg to false)
- disable exchanges (set planner.disable_exchanges to true)
- run the following query
select count(*) from (select * from dfs.data.`tpch1/lineitem.parquet` order by l_orderkey);
and you should get the following error message:
Query failed: SYSTEM ERROR: null Fragment 0:0 [e05ff3c2-e130-449e-b721-b3442796e29b on 172.30.1.1:31010]
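The session-level steps above can be sketched as sqlline commands (backtick-quoted option names, per Drill's ALTER SESSION syntax; the path and values are the ones from the report, and the memory cap goes in drill-override.conf):

```
-- in sqlline, after setting drill.exec.memory.top.max: 75000000 in drill-override.conf
ALTER SESSION SET `planner.enable_hashagg` = false;
ALTER SESSION SET `planner.disable_exchanges` = true;

select count(*) from (select * from dfs.data.`tpch1/lineitem.parquet` order by l_orderkey);
```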
We have 2 problems here:
1st problem:
- ScanBatch detects that it can't allocate its field value vectors and, right before returning OUT_OF_MEMORY downstream, it calls clear() on the field vectors
- one of those vectors actually threw a NullPointerException in its allocateNew() method after it cleared its buffer and couldn't allocate a new one
- when ScanBatch tries to clear that vector, it will throw a NullPointerException, which will prevent the ScanBatch from returning OUT_OF_MEMORY and will cancel the query instead
2nd problem:
- once the query has been canceled, ScanBatch.cleanup() will throw another NullPointerException when clearing the field vectors, which will prevent the cleanup of the remaining resources and will cause a memory leak
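Both problems share one root cause, which can be sketched as follows (hypothetical simplified classes, not Drill's actual ValueVector/DrillBuf code): a failed allocateNew() leaves the buffer field null, and a clear() that assumes the buffer exists then throws NullPointerException, while a defensive clear() treats the missing buffer as already cleared.

```java
// Hypothetical stand-ins for a reference-counted buffer and a fixed-width
// value vector; only the null-buffer failure mode is modeled.
class Buffer {
    boolean released = false;
    void release() { released = true; }
}

class FixedValueVector {
    Buffer data = new Buffer();

    // On a failed allocation the vector releases its old buffer and leaves
    // the field null -- the state DRILL-2894 fixes.
    boolean allocateNew(boolean allocationFails) {
        data.release();
        if (allocationFails) {
            data = null;
            return false;
        }
        data = new Buffer();
        return true;
    }

    // Buggy clear(): NPEs after a failed allocateNew(), so ScanBatch never
    // returns OUT_OF_MEMORY (1st problem) and cleanup() stops before
    // releasing the remaining vectors (2nd problem).
    void clearBuggy() {
        data.release();   // NullPointerException when data == null
        data = null;
    }

    // Defensive clear(): a missing buffer counts as "already cleared", so
    // cleanup can proceed to the other vectors instead of leaking them.
    void clearFixed() {
        if (data != null) {
            data.release();
            data = null;
        }
    }
}

public class VectorCleanupSketch {
    public static void main(String[] args) {
        FixedValueVector v = new FixedValueVector();
        v.allocateNew(true);          // simulate the failed allocation

        boolean npe = false;
        try {
            v.clearBuggy();
        } catch (NullPointerException e) {
            npe = true;               // this is what cancelled the query
        }

        v.clearFixed();               // fixed path: safe and idempotent
        v.clearFixed();

        System.out.println(npe ? "buggy clear() threw NPE" : "unexpected");
    }
}
```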
Issue Links
- incorporates
  - DRILL-2894 FixedValueVectors shouldn't set it's data buffer to null when it fails to allocate it (Resolved)
- is contained by
  - DRILL-2757 Verify operators correctly handle low memory conditions and cancellations (Resolved)