Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: Impala 2.0, Impala 2.1, Impala 2.2
Description
When PartitionedAggregationNode::ProcessBatch() calls ConstructIntermediateTuple() and gets back a MEM_LIMIT_EXCEEDED error, we ignore the error and try to spill instead. We did this because, in the original code, all errors along this path were ignored; when we fixed that in 2.2, we did not want to cause regressions in cases where spilling would reclaim enough memory for the query to continue.
However, this is dangerous. We know that, for example, the BufferedBlockMgr's state is inconsistent after it takes a RETURN_IF_ERROR() path (e.g. a block can end up "half pinned"). So it is not safe to continue, and this is likely the cause of various issues later in the query.
Note that the usual way that BufferedTupleStream::NewBlockForWrite() and BufferedBlockMgr::GetNewBlock() signal up the call stack that the operator needs to spill is by returning got_block=false / *block=NULL, not by returning MEM_LIMIT_EXCEEDED.
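That spill-signalling convention could be sketched as follows. This is a hypothetical, self-contained condensation for illustration: the Status struct, GetNewBlock(), and TryWrite() here are stand-ins, not the actual Impala classes.

```cpp
#include <cassert>

// Hypothetical stand-in for Impala's Status class, for illustration only.
struct Status {
  bool ok_flag = true;
  bool mem_limit = false;
  static Status OK() { return {}; }
  static Status MemLimitExceeded() { return {false, true}; }
  bool ok() const { return ok_flag; }
  bool IsMemLimitExceeded() const { return mem_limit; }
};

// The convention: when no buffer is available for a benign reason, return
// Status::OK() with *got_block = false, telling the caller to spill. A non-OK
// Status is reserved for real failures, after which internal state may no
// longer be consistent.
Status GetNewBlock(bool memory_available, bool* got_block) {
  *got_block = memory_available;  // false => caller must spill; not an error
  return Status::OK();
}

enum class Action { kWrote, kSpilled, kError };

// Caller: spills on got_block == false, propagates genuine errors upward.
Action TryWrite(bool memory_available) {
  bool got_block;
  Status status = GetNewBlock(memory_available, &got_block);
  if (!status.ok()) return Action::kError;  // real failure: do not continue
  if (!got_block) return Action::kSpilled;  // benign: reclaim memory and retry
  return Action::kWrote;
}
```

The point of the design is that "you need to spill" travels out-of-band via got_block, so a non-OK Status always means the operator must stop, not spill.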
Also note that PHJ does not drop MEM_LIMIT_EXCEEDED errors like this.
So, I think we should remove this line:
!process_batch_status_.IsMemLimitExceeded()) {
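The effect of that condition, and of removing it, might be condensed like this. These are hypothetical stand-ins to show the control flow, not the literal Impala source.

```cpp
#include <cassert>

// Hypothetical stand-in for Impala's Status, for illustration only.
struct Status {
  bool ok_flag = true;
  bool mem_limit = false;
  static Status OK() { return {}; }
  static Status MemLimitExceeded() { return {false, true}; }
  bool ok() const { return ok_flag; }
  bool IsMemLimitExceeded() const { return mem_limit; }
};

enum class Outcome { kContinue, kSpill, kFail };

// Current behavior: MEM_LIMIT_EXCEEDED is exempted from error handling, so
// control falls through to the spill path even though internal state may
// already be inconsistent.
Outcome HandleCurrent(const Status& process_batch_status_) {
  if (!process_batch_status_.ok() &&
      !process_batch_status_.IsMemLimitExceeded()) {  // the condition to remove
    return Outcome::kFail;
  }
  if (!process_batch_status_.ok()) return Outcome::kSpill;  // unsafe fallthrough
  return Outcome::kContinue;
}

// Proposed behavior: propagate every error, including MEM_LIMIT_EXCEEDED.
Outcome HandleProposed(const Status& process_batch_status_) {
  if (!process_batch_status_.ok()) return Outcome::kFail;
  return Outcome::kContinue;
}
```

With the exemption removed, a MEM_LIMIT_EXCEEDED from ConstructIntermediateTuple() fails the query instead of triggering a spill on potentially corrupt state; spilling remains reachable through the got_block=false path.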
Attachments
Issue Links
- is a child of IMPALA-2755 Clean up memory management in backend (Resolved)